• Title/Summary/Keyword: Conventional system

Search Result 14,428

Determination of Petroleum Aromatic Hydrocarbons in Seawater Using Headspace Solid-Phase Microextraction Coupled to Gas Chromatography/Mass Spectrometry (HS-SPME-GC/MS를 이용한 해수 내 유류계 방향족탄화수소 분석법)

  • An, Joon Geon;Shim, Won Joon;Ha, Sung Yong;Yim, Un Hyuk
    • Journal of the Korean Society for Marine Environment & Energy / v.17 no.1 / pp.27-35 / 2014
  • A headspace solid-phase microextraction (HS-SPME) procedure followed by gas chromatography/mass spectrometry was developed for the simultaneous determination of petroleum aromatic hydrocarbons, namely benzene, toluene, ethylbenzene and xylene isomers (BTEX) and polycyclic aromatic hydrocarbons (PAHs), in seawater. The advantages of SPME over traditional sample preparation methods are ease of operation, reusable fibers, portability, and minimal contamination and sample loss during transport and storage. The SPME fiber, extraction time, extraction temperature, stirring speed, and GC desorption time were the key extraction parameters considered in this study. Among three kinds of SPME fibers, i.e., PDMS (100 μm), CAR/PDMS (75 μm), and PDMS/DVB (65 μm), the 65 μm PDMS/DVB fiber showed the best extraction efficiencies over the molecular weight range of 78 to 202. The other extraction parameters were then optimized using the 65 μm PDMS/DVB fiber. The final optimized conditions were an extraction time of 60 min, an extraction temperature of 50 °C, a stirring speed of 750 rpm, and a GC desorption time of 3 min. When applied to artificially contaminated seawater such as a water-accommodated fraction, the optimized HS-SPME-GC/MS method showed performance comparable to conventional methods. The proposed protocol can be an attractive alternative for the analysis of BTEX and PAHs in seawater.

A historical study on the flexibility square-format typeface and the prospects - Focused on the three-pairs fonts of hangeul - (탈네모글꼴에 관한 역사적 연구와 전망 - 세벌식 한글 글꼴을 중심으로 -)

  • Yu, Jeong-Mi
    • Archives of design research / v.19 no.2 s.64 / pp.241-250 / 2006
  • Hangeul, Korea's unique writing system, was invented according to explicit character-making principles and grounded in scholars' exhaustive research. While most writing systems in the world evolved naturally, Hangeul was devised on the basis of a precise linguistic analysis of its time, making it among the most scientific and systematic scripts in the world. Nevertheless, Hangeul typeface design has not faithfully inherited this scientific and rational ideology, because the square letter frame was adopted unchanged under the influence of the Chinese characters that prevailed at the time. Designing a single set of square characters requires as many as 11,172 syllable glyphs, which suggests that the advantages of Hangeul are not fully exploited; Hangeul was invented to visualize every sound through combinations of 28 consonants and vowels. The problems of such square fonts began to be recognized in the early 1900s, when typewriters were first introduced from the West. Because a typewriter lays out the individual letters on its keyboard and composes syllables by combination, the letters can easily be combined on it. The so-called flexibility square-format (non-square) typeface was born in this way. In particular, the three-pairs fonts can compose syllables with at most 67 letterforms, including consonants and vowels. The three-pairs font system can help solve the problems arising from conventional square fonts and inherit the original ideology behind the invention of Hangeul. This study reviews the history of three-pairs font designs facilitated by the mechanical encoding of Hangeul and, on that basis, suggests desirable directions for future Hangeul fonts. Since the flexibility square-format typeface is expected to evolve further with the development of digital technology, it will serve the information age in terms of both function and convenience. Just as Hunminjeongeum sought literal independence from Chinese characters, so flexibility square-format typeface designs can help recover the identity of Hangeul font design.
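
  For reference, the 11,172 figure in the abstract above corresponds to the standard count of precomposed modern Hangeul syllable blocks: 19 initial consonants × 21 medial vowels × (27 final consonants + the no-final case) = 19 × 21 × 28 = 11,172 syllable glyphs.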

A Study on the Impact Factors of Contents Diffusion in Youtube using Integrated Content Network Analysis (일반영향요인과 댓글기반 콘텐츠 네트워크 분석을 통합한 유튜브(Youtube)상의 콘텐츠 확산 영향요인 연구)

  • Park, Byung Eun;Lim, Gyoo Gun
    • Journal of Intelligence and Information Systems / v.21 no.3 / pp.19-36 / 2015
  • Social media is an emerging issue in content services and in the current business environment, and YouTube is the most representative social media service in the world. YouTube differs from other conventional content services in its open user participation and content creation methods. To promote content on YouTube, it is important to understand the diffusion phenomena of content and the structural characteristics of the content network. Most previous studies analyzed the impact factors of content diffusion from the viewpoint of general behavioral factors, while some recent work has used network structure factors; however, the two approaches have been applied separately. This study analyzes the general impact factors on view count together with content-based network structures. In addition, when building the content network, this study forms the network by analyzing user comments on 22,370 YouTube content items, rather than relying on an individual-user network. We statistically verified the relationships between view count and both general factors and network factors. By analyzing this integrated research model, we found that the factors affect YouTube view count in the following order of importance: uploader followers, video age, betweenness centrality, comments, closeness centrality, clustering coefficient, and rating, whereas degree centrality and eigenvector centrality affect view count negatively. From these results, the following strategic points for promoting content diffusion emerge. First, general factors such as the number of uploader followers or subscribers, video age, number of comments, and average rating need to be managed; the impact of average rating is less important than previously thought, so it is more effective to increase the number of uploader followers and to keep content available in the service as long as possible. Second, attention should be paid to the impact of betweenness centrality and closeness centrality among the network factors. Users seem to search for related subjects or similar content after watching a video, so shortening the distance to other popular content in the service, i.e., reducing the number of search steps and increasing similarity to many other items, is beneficial for increasing view counts; this is consistent with the result of the clustering coefficient analysis. Third, it is important to note the negative impact of degree centrality and eigenvector centrality on view count. If the number of connections to other content grows too large, many similar items exist and the views may eventually be dispersed among them; likewise, very high eigenvector centrality means the content is connected to popular neighbors, which can draw views away, so connections with overly dominant popular content are better avoided. In summary, this study analyzed and verified the diffusion factors of YouTube content using an integrated model consisting of general factors and network structure factors. In terms of social contribution, the study may provide useful information to the music and movie industries and other content vendors for effective content services. This research provides basic schemes that can be applied strategically in online content marketing. One limitation is that the network used for structural analysis was built from comments, which is an indirect way of observing the content network; more direct methods of constructing a content network could be used. Further research includes more detailed analyses by content type, domain, and content or user characteristics.
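
As an illustration of the network structure factors named in the abstract above, the sketch below computes the same centrality and clustering measures for a comment-based content graph using NetworkX. The edge-list file, column names, and graph construction are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch (assumption: an undirected content graph where two videos are
# linked if they share commenters). Computes the network factors named in the
# abstract: degree, betweenness, closeness, eigenvector centrality, clustering.
import networkx as nx
import pandas as pd

# Hypothetical edge list with columns "video_a", "video_b"
edges = pd.read_csv("content_edges.csv")
G = nx.from_pandas_edgelist(edges, source="video_a", target="video_b")

features = pd.DataFrame({
    "degree": pd.Series(nx.degree_centrality(G)),
    "betweenness": pd.Series(nx.betweenness_centrality(G)),
    "closeness": pd.Series(nx.closeness_centrality(G)),
    "eigenvector": pd.Series(nx.eigenvector_centrality(G, max_iter=1000)),
    "clustering": pd.Series(nx.clustering(G)),
})

# These structural features could then be joined with general factors
# (uploader followers, video age, comments, rating) and regressed on view count.
print(features.head())
```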

Finding Weighted Sequential Patterns over Data Streams via a Gap-based Weighting Approach (발생 간격 기반 가중치 부여 기법을 활용한 데이터 스트림에서 가중치 순차패턴 탐색)

  • Chang, Joong-Hyuk
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.55-75 / 2010
  • Sequential pattern mining aims to discover interesting sequential patterns in a sequence database, and it is one of the essential data mining tasks widely used in application fields such as Web access pattern analysis, customer purchase pattern analysis, and DNA sequence analysis. In general sequential pattern mining, only the generation order of the data elements in a sequence is considered, so simple sequential patterns are easily found, but there is a limit to finding the more interesting patterns that are widely needed in real-world applications. Weighted sequential pattern mining is one of the essential research topics for overcoming this limitation: not only the generation order of the data elements but also their weights are considered in order to obtain more interesting sequential patterns. Recently, data in various application fields has increasingly taken the form of continuous data streams rather than finite stored data sets, and the database research community has begun focusing its attention on processing over data streams. A data stream is a massive, unbounded sequence of data elements continuously generated at a rapid rate. In data stream processing, each data element should be examined at most once, the memory used for the analysis should remain bounded even though new elements are continuously generated, and newly generated elements should be processed as fast as possible so that an up-to-date analysis result is instantly available upon request. To satisfy these requirements, data stream processing sacrifices the correctness of its analysis result by allowing some error. Considering this change in how data is generated in real-world application fields, much research has been actively conducted on finding various kinds of knowledge embedded in data streams, mainly focusing on efficient mining of frequent itemsets and sequential patterns, which have proven useful in conventional data mining over finite data sets. Mining algorithms have also been proposed to efficiently reflect the changes of data streams over time in their mining results. However, these works have targeted naively interesting patterns such as frequent patterns and simple sequential patterns, which are found intuitively, and have taken little interest in mining novel patterns that better express the characteristics of the target data streams. Therefore, defining novel interesting patterns and developing mining methods for them is a valuable research topic for the analysis of recent data streams. This paper proposes a gap-based weighting approach for sequential patterns and a method for mining weighted sequential patterns over sequence data streams based on this approach. A gap-based weight of a sequential pattern can be computed from the gaps between the data elements in the pattern, without any pre-defined weight information. That is, both the gaps between the data elements of each sequential pattern and their generation orders are used to determine the weight of the pattern, which helps to obtain more interesting and useful sequential patterns. Since most computer application fields now generate data in the form of data streams rather than finite data sets, the proposed method focuses mainly on sequence data streams.
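
The abstract describes the gap-based weight only at a high level; the following minimal sketch illustrates the general idea under the assumption that occurrences of a pattern with smaller gaps between elements receive larger weights. The decay function, names, and data are hypothetical, not the paper's actual formula.

```python
# Hedged sketch: weight a sequential pattern occurrence by the gaps between the
# positions where its elements occur in a sequence. Smaller gaps -> larger
# weight. The geometric decay function is an assumption, not the paper's formula.
from typing import Sequence, Optional

def gap_based_weight(sequence: Sequence[str], pattern: Sequence[str],
                     decay: float = 0.9) -> Optional[float]:
    """Find the first subsequence match of `pattern` in `sequence` and
    return a weight that decays geometrically with each unit of gap."""
    positions = []
    start = 0
    for item in pattern:
        # scan forward for the next occurrence of `item`
        while start < len(sequence) and sequence[start] != item:
            start += 1
        if start == len(sequence):
            return None           # pattern does not occur in this sequence
        positions.append(start)
        start += 1
    total_gap = sum(b - a - 1 for a, b in zip(positions, positions[1:]))
    return decay ** total_gap     # no gaps between elements -> weight 1.0

# Example: <a, b, c> occurring tightly vs. loosely in a sequence
print(gap_based_weight(list("axbyc"), list("abc")))   # two unit gaps -> 0.81
print(gap_based_weight(list("abc"),   list("abc")))   # no gaps       -> 1.0
```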

Results of Radiotherapy in Nasopharyngeal Cancer (비인두암의 방사선치료 결과)

  • Shin Byung Chul;Ma Sun Young;Moon Chang Woo;Yum Ha Yong;Jeung Tae Sig;Yoo Myung Jin
    • Radiation Oncology Journal / v.13 no.3 / pp.215-223 / 1995
  • Purpose: The aim of this study was to assess the effectiveness, survival rates, and complications of radiation therapy in nasopharyngeal cancer. Materials and Methods: Fifty patients with nasopharyngeal carcinoma treated with curative radiation therapy at Kosin Medical Center from January 1980 to May 1989 were studied retrospectively. Thirty-seven patients (74%) were treated with radiation therapy alone (Group I) and 13 patients (26%) with combined chemotherapy and radiation (Group II). Age distribution was 16-75 years (median: 45.8 years). By histologic type, squamous cell carcinoma was found in 30 patients (60%), undifferentiated carcinoma in 17 patients (34%), and lymphoepithelioma in 3 patients (6%). According to the AJCC staging system, 4 patients (8%) were T1, 13 patients (26%) T2, 20 patients (40%) T3, and 13 patients (26%) T4; 7 patients (14%) were N0, 6 patients (12%) N1, 23 patients (46%) N2, and 14 patients (28%) N3. Total radiation doses ranged from 5,250 to 9,200 cGy (median: 7,355 cGy) in Group I and from 5,360 to 8,400 cGy (median: 6,758 cGy) in Group II. Radiotherapy on a 4-6 MV linear accelerator and/or with 6-12 MeV electrons for boost irradiation was given with a conventional technique to 26 patients (52%), with hyperfractionation (115-120 cGy/fraction, twice daily) to 16 patients (32%), and with accelerated fractionation (160 cGy/fraction, twice daily) to 8 patients (16%). For chemotherapy, 5-FU 1,000 mg daily for 5 consecutive days, pepleomycin 10 mg on days 1 and 3, and cisplatin 100 mg on day 1 were administered at 3-week intervals for a total of 1 to 3 cycles (average 1.8 cycles) prior to radiation therapy. Follow-up duration was 6-140 months (mean: 58 months). Statistics were calculated with the chi-square and Fisher's exact tests. Results: Complete local control rates in Groups I and II were 75.7% and 69.2%, respectively. Overall 5-year survival rates in Groups I and II were 56.8% and 30.8%. Five-year survival rates by histologic type in Groups I and II were 52.2% and 14.3% for squamous cell carcinoma and 54.5% and 50% for undifferentiated carcinoma. Survival rates in Group I were superior to those in Group II, though the differences were not statistically significant. In both groups, survival rates appeared to increase with total radiation dose up to 7,500 cGy, but not beyond it. There were no statistically significant differences in survival rates by age, stage, or radiation technique in either group. Twenty-four patients (48%) experienced treatment failure. Complications were found in 12 patients (24%); the most common was osteomyelitis (4 patients, 33.3%) involving the mandible (3 patients) and maxilla (1 patient). Conclusion: Chemotherapy combined with radiotherapy was not found to be effective in nasopharyngeal cancer, and the survival rate was inferior to that of the radiation-alone group, although the difference was not statistically significant owing to the small population of the chemotherapy-combined group.

Evaluation of Usefulness of Portal Image Using Electronic Portal Imaging Device (EPID) in the Patients Who Received Pelvic Radiation Therapy (골반강 내 방사선 치료 환자에서 Electronic Portal Imaging Device(EPID)를 이용한 Portal Image의 유용성에 관한 연구)

  • Kim Woo Chul;Park Won;Kim Heon Jong;Park Seong Young;Cho Young Kap;Loh John J;Suh Chang Ok;Kim Gwi Eon
    • Radiation Oncology Journal / v.16 no.4 / pp.497-504 / 1998
  • Purpose: To evaluate the usefulness of an electronic portal imaging device (EPID) through an objective comparison of images acquired with the EPID and with conventional port film. Materials and Methods: From April to October 1997, a total of 150 sets of images from 20 patients who received radiation therapy to the pelvic area were evaluated at Inha University Hospital and Severance Hospital. A dual image recording technique was devised to obtain both electronic portal images and port film images simultaneously within one treatment course; double exposure was not performed, and five to ten images were acquired from each patient. All images were acquired in the posteroanterior (PA) view except those from two patients. A dose rate of 100-300 MU/min and a 10-MV X-ray beam were used, and 2-10 MU were required to produce a verification image during treatment. Kodak diagnostic film with a metal/film imaging cassette, located on top of the EPID detector, was used for the port film. The source-to-detector distance was 140 cm. Eight anatomical landmarks (pelvic brim, sacrum, acetabulum, iliopectineal line, symphysis, ischium, obturator foramen, sacroiliac joint) were assessed. Four radiation oncologists evaluated each image, rating the individual landmarks in the port film or the EPID image as very clear (1), clear (2), visible (3), not clear (4), or not visible (5). Results: With a video-camera-based EPID system, there was no difference in image quality between unenhanced EPID images and port film images. However, when the window level of the portal image was adjusted, the visibility of the sacrum and obturator foramen was better in the portal images than in the port film images. All anatomical landmarks were more visible in the portal images than in the port film when CLAHE enhancement was applied. Images acquired with a matrix ion chamber type EPID also showed improved image quality after window level adjustment. Conclusion: The quality of images acquired with the electronic portal imaging device was comparable to that of the port film. With enhancement or window level adjustment, the image quality of the EPID was superior to that of the port film, so the EPID may replace the port film.
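
The CLAHE enhancement mentioned above is a standard contrast-limited adaptive histogram equalization step. As a rough, hedged illustration (not the EPID system's own software), a portal image could be enhanced along these lines with OpenCV; the file name and parameter values are assumptions.

```python
# Hedged sketch of CLAHE (contrast-limited adaptive histogram equalization)
# applied to a portal image, using OpenCV. Not the EPID vendor's implementation;
# file name and parameters are illustrative only.
import cv2

portal = cv2.imread("portal_image.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# Clip limit and tile size control how aggressively local contrast is boosted.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(portal)

cv2.imwrite("portal_image_clahe.png", enhanced)
```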

Legislative Study on the Mitigation of the Burden of Proof in Hospital Infection Cases - Focusing on the revised Bürgerliches Gesetzbuch - (병원감염 사건에서 증명책임 완화에 관한 입법적 고찰 - 개정 독일민법을 중심으로 -)

  • Yoo, Hyun Jung
    • The Korean Society of Law and Medicine / v.16 no.2 / pp.159-193 / 2015
  • Owing to causes such as population aging, increased use of various medical devices, long-term hospitalization of patients with reduced immune function (such as cancer, diabetes, and organ transplant patients), and the growing size of hospitals, hospital infections continue to increase. As seen in the MERS crisis of 2015, hospital infections have become a social and national problem. To prevent damage from such infections, it is necessary both to strictly implement infection prevention measures and to provide proper relief for damage actually suffered. However, the mainstream attitude of judicial precedents in hospital infection cases has, in effect, been to shift responsibility for damages onto the patient. In light of the philosophy of the damage compensation system, whose guiding principle is the fair and proper apportionment of damages, there is a need for means of relaxing the burden of proof on the patient's side more drastically than the conventional legal principles for relaxing the burden of proof or the theory of de facto presumption allow. In relation to this need, this study examines the revised German Civil Code (Bürgerliches Gesetzbuch), which defines contracts of medical treatment as typical contracts under the civil code and contains presumption-of-negligence provisions: in situations such as hospital infections, which lie completely under the control of the medical care provider, if a general treatment risk materializes and injures the life, body, or health of the patient, fault on the part of the care provider is presumed. Contracts of medical treatment are entered into very frequently and broadly in the everyday lives of the general public, and various disputes arise from them. It is therefore necessary to regulate the content of such contracts, as well as the burden of proof when disputes arise, by defining them as typical contracts under the civil code. If stipulation in the civil code is premature as yet, an option may be to regulate through a special act, as is the case in France. In hospital infection cases, a 'legal presumption of negligence' concerning 'negligence in the occurrence of hospital infections,' which would create a state close to equality of arms, should help resolve the practical problem that damages caused by negligence in the course of a hospital infection are at present de facto impossible to remedy. Moreover, even if negligence is presumed by law, the patient side still bears the burden of proving causation, so the drastic confusion that would occur if the medical care provider were held fully liable whenever a hospital infection occurs can be avoided. Alongside such efforts, social insurance policy should be improved so as to cover the expenses of medical institutions that have strictly implemented infection prevention efforts but nevertheless suffer damages from a hospital infection accident, and close future research and examination of this matter will be required.

Increase of Tc-99m RBC SPECT Sensitivity for Small Liver Hemangioma using Ordered Subset Expectation Maximization Technique (Tc-99m RBC SPECT에서 Ordered Subset Expectation Maximization 기법을 이용한 작은 간 혈관종 진단 예민도의 향상)

  • Jeon, Tae-Joo;Bong, Jung-Kyun;Kim, Hee-Joung;Kim, Myung-Jin;Lee, Jong-Doo
    • The Korean Journal of Nuclear Medicine / v.36 no.6 / pp.344-356 / 2002
  • Purpose: RBC blood pool SPECT has been used to diagnose focal liver lesions such as hemangioma owing to its high specificity, but low spatial resolution is a major limitation of this modality. Recently, ordered subset expectation maximization (OSEM) has been introduced for clinical tomographic reconstruction. We compared this modified iterative reconstruction method, OSEM, with conventional filtered back projection (FBP) in imaging of liver hemangioma. Materials and Methods: Sixty-four projections were acquired using a dual-head gamma camera in 28 lesions of 24 patients with cavernous hemangioma of the liver, and the raw data were transferred to a Linux-based personal computer. After converting the header files to Interfile format, OSEM was performed under various combinations of subsets (1, 2, 4, 8, 16, and 32) and iteration numbers (1, 2, 4, 8, and 16) to find the best setting for liver imaging; in our investigation, 4 iterations and 16 subsets were considered the best condition. All images were then reconstructed with both FBP and OSEM, and three experts reviewed the images without any clinical information. Results: According to the blind review of 28 lesions, OSEM images showed the same or better image quality than FBP in nearly all cases. Although there was no significant difference in the detection of lesions larger than 3 cm, 5 lesions of 1.5 to 3 cm in diameter were detected by OSEM only. Both techniques failed to depict the 4 lesions smaller than 1.5 cm. Conclusion: OSEM showed better contrast and definition in the depiction of liver hemangioma as well as higher sensitivity for small lesions. Furthermore, this reconstruction method does not require a high-performance computer system or long reconstruction times; therefore, OSEM appears to be a good method for RBC blood pool SPECT in the diagnosis of liver hemangioma.
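
The OSEM reconstruction compared above cycles a multiplicative EM update over ordered subsets of the projections. The following is a minimal sketch of that update on a toy dense system matrix (contiguous subsets for simplicity, whereas clinical OSEM typically interleaves projection angles); it is illustrative only and not the reconstruction software used in the study.

```python
# Hedged sketch of the OSEM update (ordered subset expectation maximization).
# A is the projection (system) matrix, y the measured projections.
import numpy as np

def osem(A, y, n_subsets=16, n_iterations=4, eps=1e-12):
    """Cycle the multiplicative EM update over ordered subsets of projection
    rows and return the estimated activity image (flattened)."""
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)                              # uniform initial estimate
    subsets = np.array_split(np.arange(n_proj), n_subsets)
    for _ in range(n_iterations):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            sens = As.sum(axis=0) + eps             # subset sensitivity image
            ratio = ys / (As @ x + eps)             # measured / forward-projected
            x = x * (As.T @ ratio) / sens           # multiplicative EM update
    return x

# Tiny synthetic example (hypothetical 256-projection, 64-voxel problem)
rng = np.random.default_rng(0)
A = rng.random((256, 64))
x_true = rng.random(64)
y = A @ x_true
x_hat = osem(A, y, n_subsets=16, n_iterations=4)
print(np.corrcoef(x_true, x_hat)[0, 1])   # correlation between true and estimate
```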

Bankruptcy Prediction Modeling Using Qualitative Information Based on Big Data Analytics (빅데이터 기반의 정성 정보를 활용한 부도 예측 모형 구축)

  • Jo, Nam-ok;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.33-56 / 2016
  • Many researchers have focused on developing bankruptcy prediction models using modeling techniques such as statistical methods, including multiple discriminant analysis (MDA) and logit analysis, or artificial intelligence techniques, including artificial neural networks (ANN), decision trees, and support vector machines (SVM), to secure enhanced performance. Most bankruptcy prediction models in academic studies have used financial ratios as their main input variables. The bankruptcy of firms is associated with both the firm's financial state and the external economic situation. However, the inclusion of qualitative information, such as the economic atmosphere, has not been actively discussed, despite the fact that exploiting only financial ratios has drawbacks. Accounting information such as financial ratios is based on past data and is usually determined one year before bankruptcy, so a time lag exists between the closing of the financial statements and the credit evaluation. In addition, financial ratios do not capture environmental factors such as the external economic situation. Therefore, using only financial ratios may be insufficient for constructing a bankruptcy prediction model, because they essentially reflect past corporate internal accounting information while neglecting recent information. Qualitative information must thus be added to the conventional bankruptcy prediction model to supplement the accounting information. Due to the lack of an analytic mechanism for obtaining and processing qualitative information from various sources, previous studies have made little use of such information. Recently, however, big data analytics such as text mining has been drawing much attention in academia and industry, as an increasing amount of unstructured text data becomes available on the web. A few previous studies have sought to adopt big data analytics in business prediction modeling. Nevertheless, the use of qualitative web information for business prediction modeling is still in its early stages, restricted to limited applications such as stock prediction and movie revenue prediction. It is therefore necessary to apply big data analytics techniques, such as text mining, to a wider range of business prediction problems, including credit risk evaluation; analytic methods are required for processing qualitative information represented in unstructured text form because of the complexity of managing and processing such data. This study proposes a bankruptcy prediction model for Korean small- and medium-sized construction firms that uses both quantitative information, such as financial ratios, and qualitative information acquired from economic news articles. The performance of the proposed method depends on how well the qualitative information is transformed into quantitative information suitable for incorporation into the bankruptcy prediction model. We employ big data analytics techniques, especially text mining, as the mechanism for processing the qualitative information. A sentiment index is constructed at the industry level by extracting sentiment from a large amount of text data in order to quantify the external economic atmosphere represented in the media. The proposed method involves keyword-based sentiment analysis using a domain-specific sentiment lexicon to extract sentiment from economic news articles. The generated sentiment lexicon is designed to represent sentiment for the construction business by considering the relationship between an occurring term and the actual economic condition of the industry, rather than the inherent semantics of the term. The experimental results showed that incorporating qualitative information based on big data analytics into the traditional, accounting-based bankruptcy prediction model is effective for enhancing predictive performance. The sentiment variable extracted from economic news articles had an impact on corporate bankruptcy; in particular, a negative sentiment variable improved the accuracy of bankruptcy prediction, because the bankruptcy of construction firms is sensitive to poor economic conditions. The bankruptcy prediction model using qualitative information based on big data analytics contributes to the field in that it reflects not only relatively recent information but also environmental factors such as external economic conditions.
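
As a rough illustration of the keyword-based sentiment analysis described above, the sketch below scores news articles against a small domain-specific lexicon and aggregates the scores into a monthly industry-level index. The lexicon entries, weights, field names, and aggregation are hypothetical, not the paper's actual lexicon or formula.

```python
# Hedged sketch of lexicon-based sentiment scoring for economic news articles.
# The lexicon terms, weights, and aggregation below are illustrative only.
import re
from collections import defaultdict

# Hypothetical domain-specific sentiment lexicon for the construction industry
LEXICON = {
    "default": -2.0, "unsold": -1.5, "slump": -1.0, "restructuring": -1.0,
    "recovery": 1.0, "orders": 0.5, "boom": 1.5,
}

def article_sentiment(text: str) -> float:
    """Sum lexicon weights over the tokens of one article."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return sum(LEXICON.get(tok, 0.0) for tok in tokens)

def monthly_index(articles):
    """Average article sentiment per month -> industry-level sentiment index."""
    totals, counts = defaultdict(float), defaultdict(int)
    for month, text in articles:                  # articles: (YYYY-MM, text)
        totals[month] += article_sentiment(text)
        counts[month] += 1
    return {m: totals[m] / counts[m] for m in totals}

# Tiny usage example with made-up articles
news = [("2015-01", "Unsold apartments rise as the construction slump deepens"),
        ("2015-02", "Overseas orders signal a recovery for builders")]
print(monthly_index(news))   # e.g. {'2015-01': -2.5, '2015-02': 1.5}
```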

Recent research activities on hybrid rocket in Japan

  • Harunori, Nagata
    • Proceedings of the Korean Society of Propulsion Engineers Conference / 2011.04a / pp.1-2 / 2011
  • Hybrid rockets have lately attracted attention as strong candidates for small, low-cost, safe, and reliable launch vehicles. A notable example is that SpaceShipOne, the first commercially sponsored spaceship, chose a hybrid rocket; the main factors in that choice were operational safety, system cost, quick turnaround, and thrust termination. In Japan, five universities including Hokkaido University and three private companies organized the "Hybrid Rocket Research Group" from 1998 to 2002, whose main purpose was to reduce the cost and scale of rocket experiments. In 2002, UNISEC (University Space Engineering Consortium) and HASTIC (Hokkaido Aerospace Science and Technology Incubation Center) took over the educational and R&D rocket activities, respectively, and the research group dissolved. In 2008, JAXA/ISAS and eleven universities formed the "Hybrid Rocket Research Working Group" as a subcommittee of the Steering Committee for Space Engineering in ISAS; its goal is to demonstrate the technical feasibility of low-cost, high-frequency launches of nano/micro satellites into sun-synchronous orbits. Hybrid rockets use a combination of solid and liquid propellants, usually with the fuel in the solid phase. A serious problem of hybrid rockets is the low regression rate of the solid fuel. In single-port hybrids, a regression rate below 1 mm/s leads to a length-to-diameter ratio (L/D) exceeding one hundred and a fuel loading ratio below 0.3. Multi-port hybrids are a typical solution to this problem, but this approach is not the mainstream in Japan. Another approach is to use high-regression-rate fuels: for example, a fuel regression rate of 4 mm/s decreases L/D to around 10 and increases the loading ratio to around 0.75. Liquefying fuels such as paraffins are strong candidates for high-regression fuels and are a subject of active research in Japan as well. Nakagawa et al. at Tokai University employed EVA (ethylene vinyl acetate) to modify the viscosity of paraffin-based fuels and investigated the effect of viscosity on regression rates. Wada et al. at Akita University employed LTP (low-melting thermoplastic) as another candidate liquefying fuel and demonstrated high regression rates comparable to paraffin fuels. Hori et al. at JAXA/ISAS employed glycidyl azide-poly(ethylene glycol) (GAP-PEG) copolymers as high-regression-rate fuels and modified the combustion characteristics by changing the PEG mixing ratio. Regression rate improvement by changing the internal ballistics is another stream of research. The author proposed a new fuel configuration named "CAMUI" in 1998; CAMUI is an abbreviation of "cascaded multistage impinging-jet," which describes the distinctive flow field. A CAMUI-type fuel grain consists of several cylindrical fuel blocks with two ports in the axial direction, and the port alignment of successive blocks is rotated by 90 degrees so that the jets from the ports impinge on the upstream end face of the downstream fuel block, resulting in intense heat transfer to the fuel. Yuasa et al. at Tokyo Metropolitan University employed a swirling injection method and improved regression rates by more than a factor of three; however, the regression rate distribution along the axis is not uniform owing to the decay of the swirl strength. Aso et al. at Kyushu University employed multi-swirl injection to solve this problem. Combinations of swirling injection and paraffin-based fuels have also been tried, and some results show very high regression rates exceeding ten times that of conventional fuels. High fuel regression rates achieved by new fuels, new internal ballistics, or a combination of the two require faster fuel-oxidizer mixing to maintain combustion efficiency; Nakagawa et al. succeeded in improving the combustion efficiency of a paraffin-based fuel from 77% to 96% with a baffle plate, and another effective approach being tried is an aft-chamber that increases residence time. Better understanding of the new flow fields is necessary to reveal the basic mechanisms of regression enhancement. Yuasa et al. visualized the combustion field in a swirling-injection-type motor, and Nakagawa et al. observed boundary layer combustion of wax-based fuels. To understand the detailed flow structures in swirling-flow-type hybrids, Sawada et al. (Tohoku Univ.), Teramoto et al. (Univ. of Tokyo), Shimada et al. (ISAS), and Tsuboi et al. (Kyushu Inst. Tech.) are trying to simulate the flow field numerically; the main challenges are turbulent reaction, stiffness due to low-Mach-number flow, fuel regression modeling, and other unsteady phenomena. Oshima et al. at Hokkaido University simulated CAMUI-type flow fields and discussed the correspondence between the regression distribution of a burning surface and the vortex structure above it.
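
The link between regression rate and grain length-to-diameter ratio mentioned above can be made concrete with a back-of-the-envelope calculation based on the standard cylindrical-port relation for fuel mass flow. The numbers below (fuel flow, fuel density, port diameter) are hypothetical and only illustrate the trend, not the specific figures in the abstract.

```python
# Hedged sketch of why the fuel regression rate drives the grain L/D ratio in a
# single-port hybrid. Uses the standard cylindrical-port relation
# m_dot_fuel = rho_fuel * r_dot * (pi * D * L); all numbers are hypothetical.
import math

def port_length(m_dot_fuel, rho_fuel, r_dot, diameter):
    """Grain length needed to supply m_dot_fuel from a cylindrical port."""
    burning_area_per_length = math.pi * diameter          # pi*D per metre of port
    return m_dot_fuel / (rho_fuel * r_dot * burning_area_per_length)

m_dot_fuel = 0.5      # kg/s, hypothetical fuel flow for a small motor
rho_fuel = 900.0      # kg/m^3, roughly paraffin-like density
diameter = 0.05       # m, hypothetical port diameter

for r_dot_mm in (1.0, 4.0):                 # regression rates from the abstract
    L = port_length(m_dot_fuel, rho_fuel, r_dot_mm * 1e-3, diameter)
    print(f"r_dot = {r_dot_mm} mm/s -> L = {L:.2f} m, L/D = {L / diameter:.0f}")
# For fixed fuel flow and port diameter, quadrupling the regression rate cuts
# the required grain length, and hence L/D, by a factor of four.
```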
