• Title/Summary/Keyword: Critical limit


Calculation of Unit Hydrograph from Discharge Curve, Determination of Sluice Dimension and Tidal Computation for Determination of the Closure curve (단위유량도와 비수갑문 단면 및 방조제 축조곡선 결정을 위한 조속계산)

  • 최귀열
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.7 no.1
    • /
    • pp.861-876
    • /
    • 1965
  • During my stay in the Netherlands, I studied the following topics, primarily in relation to the Mokpo Yong-san project which had been studied by NEDECO for a feasibility report. 1. Unit hydrograph at Naju. There are many ways to derive a unit hydrograph, but I want to explain here how to derive one from the actual runoff curve at Naju. A discharge curve made from one rain storm depends on the rainfall intensity per hour. After finding the hydrograph at two-hour intervals, we obtain the two-hour unit hydrograph by dividing each ordinate of the two-hour hydrograph by the rainfall intensity. I used one storm from June 24 to June 26, 1963, with an average rainfall intensity of 9.4 mm per hour for 12 hours. If several rain gauge stations had already been established in the catchment area above Naju prior to this storm, I could have gathered accurate data on rainfall intensity throughout the catchment area. As it was, I used the automatic rain gauge record of the Mokpo meteorological station to determine the rainfall intensity. In order to develop the unit hydrograph at Naju, I subtracted the base flow from the total runoff. I also tried to keep the difference between the calculated discharge and the measured discharge below 10%. The discharge period of a unit hydrograph depends on the length of the catchment area. 2. Determination of sluice dimension. According to the design principles presently used in our country, a one-day storm with a frequency of 20 years must be discharged in 8 hours. These design criteria are not adequate, and several dams have washed out in past years. The design of the spillway and sluice dimensions must be based on the maximum peak discharge flowing into the reservoir to avoid crop and structural damage. The total flow into the reservoir is the sum of the flow described by the Mokpo hydrograph, the base flow from all the catchment areas, and the rainfall on the reservoir area. To calculate the amount of water discharged through the sluice (per half hour), the average head during that interval must be known. This can be calculated from the known water level outside the sluice (determined by the tide) and from an estimated water level inside the reservoir at the end of each time interval. The total amount of water discharged through the sluice can then be calculated from this average head, the time interval, and the cross-sectional area of the sluice. From the inflow into the reservoir and the outflow through the sluice gates I calculated the change in the volume of water stored in the reservoir at half-hour intervals. From the stored volume of water and the known storage capacity of the reservoir, I was able to calculate the water level in the reservoir. The calculated water level in the reservoir must be the same as the estimated water level. Mean tide is adequate for determining the sluice dimension, because spring tide is the worst case and neap tide the best case for the result of the calculation. 3. Tidal computation for determination of the closure curve. During the construction of a dam, whether by building up a succession of horizontal layers or by building in from both sides, the velocity of the water flowing through the closing gap will increase, because of the gradual decrease in the cross-sectional area of the gap. I calculated the velocities in the closing gap during flood and ebb for the first-mentioned method of construction until the cross-sectional area had been reduced to about 25% of the original area, the change in tidal movement within the reservoir being negligible. Up to that point, the increase of the velocity is more or less hyperbolic. During the closing of the last 25% of the gap, less water can flow out of the reservoir. This causes a rise of the mean water level of the reservoir. The difference in hydraulic head is then no longer negligible and must be taken into account. When, during the course of construction, the submerged weir becomes a free weir, critical flow occurs. The critical flow is that point, during either ebb or flood, at which the velocity reaches a maximum. When the dam is raised further, the velocity decreases because of the decrease in the height of the water above the weir. The calculation of the currents and velocities for a stage in the closure of the final gap is done in the following manner: using an average tide with a negligible daily quantity, I estimated the water level on the upstream side of the dam (inner water level). I determined the current through the gap for each hour by multiplying the storage area by the increment of the rise in water level. The velocity at a given moment can be determined from the calculated current in m³/sec and the cross-sectional area at that moment. At the same time, from the difference between the inner water level and the tidal level (outer water level), the velocity can be calculated with the formula $h = \frac{V^2}{2g}$ and must be equal to the velocity determined from the current. If there is a difference between the two velocities, a new estimate of the inner water level must be made and the entire procedure repeated. When the higher water level is equal to or more than 2/3 times the difference between the lower water level and the crest of the dam, we speak of a "free weir." The flow over the weir is then dependent upon the higher water level and not on the difference between high and low water levels. When the weir is "submerged", that is, when the higher water level is less than 2/3 times the difference between the lower water level and the crest of the dam, the difference between the high and low levels is decisive. The free weir normally occurs first during ebb, due to the fact that the mean level in the estuary is higher than the mean level of the tide. In building dams with barges the maximum velocity in the closing gap may not be more than 3 m/sec. As the maximum velocities are higher than this limit, we must use other construction methods to close the gap. This can be done by dump-cars from each side or by using a cable way.
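
The hour-by-hour balancing described under item 3, where the velocity implied by the change in stored volume must equal the velocity given by $h = \frac{V^2}{2g}$, can be sketched numerically. The code below is only an illustration of that iteration under simplifying assumptions (constant storage area, a plain bisection on the inner level, made-up numbers); it is not the author's calculation.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def solve_inner_level(outer, inner_prev, storage_area, gap_area,
                      dt=3600.0, iters=60):
    """One hourly step of the closure-gap computation: choose the inner
    (reservoir) level so that the velocity implied by the change in stored
    volume equals the velocity implied by the head difference, h = V^2/(2g).
    Bisection on the fraction t of the way the inner level moves toward the
    outer (tidal) level."""
    lo, hi = 0.0, 1.0
    est, v_head = inner_prev, 0.0
    for _ in range(iters):
        t = 0.5 * (lo + hi)
        est = inner_prev + t * (outer - inner_prev)
        q = storage_area * abs(est - inner_prev) / dt      # current, m^3/s
        v_storage = q / gap_area                           # from the current
        v_head = math.sqrt(2.0 * G * abs(outer - est))     # from the head
        if v_storage < v_head:
            lo = t   # inner level must move further toward the tide
        else:
            hi = t   # inner level has moved too far
    return est, v_head

# Illustrative (made-up) numbers: 50 km^2 of storage area, a 400 m^2 closing
# gap, an outer tidal level of +1.5 m and a previous inner level of -0.5 m.
level, velocity = solve_inner_level(outer=1.5, inner_prev=-0.5,
                                    storage_area=50e6, gap_area=400.0)
print(f"inner level = {level:.2f} m, gap velocity = {velocity:.2f} m/s")
```

With these illustrative numbers the gap velocity comes out well above the 3 m/sec limit mentioned for barge closure, which is the kind of result that motivates switching to dump-cars or a cable way.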


Influence of Tightening Torque on Implant-Abutment Screw Joint Stability (조임회전력이 임플랜트-지대주 나사 연결부의 안정성에 미치는 영향)

  • Shin, Hyon-Mo;Jeong, Chang-Mo;Jeon, Yonung-Chan;Yun, Mi-Jeong;Yoon, Ji-Hoon
    • The Journal of Korean Academy of Prosthodontics
    • /
    • v.46 no.4
    • /
    • pp.396-408
    • /
    • 2008
  • Statement of problem: Within the elastic limit of the screw, the greater the preload, the tighter and more secure the screw joint. However, additional tensile forces can cause plastic deformation of the abutment screw when functional loads are superimposed on preload stresses, and they can elicit loosening or fracture of the abutment screw. Therefore, it is necessary to find the optimum preload that maximizes fatigue life and simultaneously offers a reasonable degree of protection against loosening. Another critical factor, in addition to the applied torque, which can affect the amount of preload is the type of joint connection between implant and abutment. Purpose: The purpose of this study was to evaluate the influence of tightening torque on implant-abutment screw joint stability. Material and methods: Three different tightening torques (20, 30, and 40 Ncm) were applied to implant systems with three different joint connections, one external butt joint and two internal cones. The initial removal torque value and the postload (cyclic loading up to 100,000 cycles) removal torque value of the abutment screw were measured with a digital torque gauge. The rates of initial and postload removal torque loss were then calculated to compare the effects of tightening torque and of joint connection type between implant and abutment on joint stability. Results and conclusion: 1. An increase in tightening torque resulted in a significant increase in the initial and postload removal torque values in all implant systems (P < .05). 2. Initial removal torque loss rates in the SS II system were not significantly different when the three tightening torque values were applied (P > .05); however, the GS II and US II systems exhibited significantly lower loss rates with a 40 Ncm torque than with 20 Ncm (P < .05). 3. In all implant systems, postload removal torque loss rates were lowest when a torque of 30 Ncm was applied (P < .05). 4. Postload removal torque loss rates tended to increase in the order of the SS II, GS II, and US II systems. 5. There was no correlation between the initial removal torque value and the postload removal torque loss rate (P > .05).
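
The abstract reports "removal torque loss rates" without spelling out the formula. A common way to express this quantity, assumed here purely for illustration, is the percentage drop of the removal torque relative to the applied tightening torque.

```python
def removal_torque_loss_rate(tightening_torque_ncm: float,
                             removal_torque_ncm: float) -> float:
    """Percentage loss of removal torque relative to the applied tightening
    torque. NOTE: this definition is an assumption for illustration; the
    study's exact formula is not given in the abstract."""
    return (tightening_torque_ncm - removal_torque_ncm) / tightening_torque_ncm * 100.0

# Hypothetical values, not data from the study:
print(removal_torque_loss_rate(30.0, 25.5))   # -> 15.0 (%)
```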

Derivation and Empirical Analysis of Critical Factors that Facilitate Technology Transfer and Commercialization of Research Outcome (연구성과의 기술이전 및 사업화 촉진요인 도출 및 실증분석)

  • Ku, Bon Chul
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.9 no.5
    • /
    • pp.69-81
    • /
    • 2014
  • There is growing interest in technology transfer and commercialization both at home and abroad. Accordingly, this study looked at the concept of technology transfer and commercialization, identified the factors that should be taken into account in order to facilitate technology transfer and commercialization, and then performed an empirical analysis. Conventional views of technology transfer and commercialization tended to limit its scope to the exploration, transfer, and commercialization of the technology itself. In this research, technology transfer and commercialization is defined more broadly, as the various activities carried out to ensure that intellectual properties such as intangible technological developments, know-how, and knowledge are transferred between the relevant parties through a contract or negotiation, so that the receiving party can further develop and exploit the technology into tangible products and other activities to obtain economic benefit. In addition, the empirical analysis revealed that the focus of facilitating technology transfer has been on the technology itself, its management, and securing the efficiency of the systems and institutions involved in technology transfer and commercialization, while there was a lack of recognition of the importance of financial support in the technology commercialization phase. This indicates that, when it comes to technology transfer and commercialization, attention has centered on quantitative performance such as patent applications, registrations, the number of technology transfers, royalties, and so on, with insufficient attention to starting up businesses and creating quality jobs through technology transfer and commercialization, which are directly related to the realization of the creative economy. In this regard, this research is expected to inform the development of future policies to boost technology transfer and commercialization, as it suggests that it is necessary not only to ensure quantitative performance but also to create a stable ecosystem for the parties involved in technology transfer and commercialization and, in turn, the circumstances in which the creative economy can be realized.


Effect of Additives on Paper Aging (종이 첨가제가 종이의 노화에 미치는 영향)

  • 윤병호;이명구;최경화
    • Journal of Korea Foresty Energy
    • /
    • v.21 no.2
    • /
    • pp.25-33
    • /
    • 2002
  • One of the critical problems in preserving books and documents in libraries and archives is deterioration. Previous results showed that the major cause of paper deterioration is acid-catalyzed hydrolysis of the cellulose in paper fibres, and that the aging rate of acidic paper is faster than that of alkaline paper. Therefore, it is necessary to remove the acid in the paper to reduce the rate of paper deterioration; deacidification has been reported to extend the useful life of acidic paper by three to five times. Recently, the need for an effective method of deacidifying large quantities of books and documents has been recognized. However, in many previous reports little attention was paid to the effect of paper additives. In this paper, we carried out experiments on the effect of additives on paper aging and on the effect of deacidification by gaseous ethanolamines (monoethanolamine, diethanolamine, triethanolamine). As a result, it was found that the degree of aging was in the order alum+rosin > alum > AKD > control, and the rate of deacidification was in the order monoethanolamine > diethanolamine > triethanolamine. Treatment with the gaseous ethanolamines caused a decrease in brightness and a drop in folding endurance. However, deacidification by a combined treatment of the various gaseous ethanolamines prevented the decrease in brightness and the drop in folding endurance.


Critical Analyses of '2nd Science Inquiry Experiment Contest' (과학탐구 실험대회의 문제점 분석)

  • Paik, Seoung-Hey
    • Journal of The Korean Association For Science Education
    • /
    • v.15 no.2
    • /
    • pp.173-184
    • /
    • 1995
  • The purpose of this study was to analyse the problems of the 'Science Inquiry Experiment Contest (SIEC)', which was one of 8 programs of 'The 2nd Student Science Inquiry Olympic Meet (SSIOM)'. The results and conclusions of this study were as follows: 1. The role of practical work within science experiments needs to be reconsidered, because practical skills form one of the mainstays of current science, yet the assessment of students' laboratory skills in the contest was given little weight. It is necessary to recall what it means to be 'good at science'. There are two aspects: knowing and doing. Both are important and, in certain respects, quite distinct. Doing science is more of a craft activity, relying more on craft skill and tacit knowledge than on the conscious application of explicit knowledge. Doing science is also divided into two aspects, 'process' and 'skill', by many science educators. 2. The assessment items of the reports and of the checklists overlapped. It was therefore suggested that the checklist assessment items be limited to those student acts which cannot be found in the reports. It is important to identify those activities which produce a permanent assessable product and those which do not. Skills connected with recording and reporting are likely to produce permanent evidence which can be evaluated after the experiment. Those connected with manipulative skills involving processes are more ephemeral and need to be assessed as they occur. This division of students' experimental skills will contribute to an accurate assessment of students' scientific inquiry experimental ability. 3. There was a wide difference among the scores of one participant recorded by the three evaluators. This means that there was no concrete discussion among the evaluators before the contest. Although the items of the checklists were set by the preparers of the contest experiments, concrete discussion before the contest was necessary because students' experimental acts are very diverse. There is a variety of scientific skills, so it is necessary to assess the performance of individual students across a range of skills. But most of the difficulties in the assessment of skills arise from the interaction between measurement and use. To overcome these difficulties, not only must the mark awarded for each skill be recorded, something which all examination groups obviously need, but a description of the work the student did when the skill was assessed must also be given, which not all groups need. Fuller details must also be available for the purposes of moderation. It is a requirement for all students that there be provision for samples of any end-product or other tangible form of evidence of candidates' work to be submitted for inspection. This is rather important if one is to be as fair as possible to students because, not only can this work be made available to moderators if necessary, but it can also be used to help arrive at common standards among several evaluators and to ensure consistent standards from one evaluator over the assessment period. This need arises because there are problems associated with assessing different students on the same skill in different activities. 4. Most of the students' reports were assessed intuitively by the evaluators, even though the assessment items had been established concretely by the preparers of the experiment. This result means that the evaluators were unable to grasp the essence of the established assessment items of the experiment report, and that the students' assessment scores lacked objectivity. Lastly, there are suggestions arising from the results and conclusions. Students' experimental acts which are difficult to observe because they occur in a flash, and which can be easily imitated, should be excluded from the assessment items: evaluators are likely to miss the moment to observe such acts, and students who are assessed later have more opportunity to practise the skill being assessed. It is necessary to be aware of these problems and try to reduce their influence or remove them. The skills and processes analysis has produced a very useful checklist for scientific inquiry experiment assessment. But in itself it is of little value; it must be seen alongside the other vital attributes needed in the making of a good scientist: the affective aspects of commitment and confidence, the personal insights which come through both formal and informal learning, and the tacit knowledge that comes through experience, both structured and acquired in play. These four aspects must continually interact, in a flexible and individualistic way, throughout the scientific education of students. An increasing ability to be good at science, to be good at doing investigational practical work, will be gained by continually, successively, but often unpredictably, developing more experience, developing more insights, developing more skills, and producing more confidence and commitment.


A Study on the Start-up and Growth Business Model of Small and Medium-Sized Manufacturing Enterprises: Hyunsung Techno (제조기업의 창업과 성장의 비즈니스 모델 연구: 현성테크노)

  • Choi, In-Hyok;Kim, Do-Yeon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.14 no.6
    • /
    • pp.103-117
    • /
    • 2019
  • Under the uncertainties and consequent turmoil of the IMF financial crisis in Korea, Hyunsung Techno was founded in 1997 on the basis of automobile press molding, which is critical to the quality of automobiles. Ever since, Hyunsung Techno has grown rapidly on the strength of the domestic market; gradually, however, it faced a stalemate owing to saturation on the supply side and the growth limit on the demand side of the domestic molding market. Accordingly, Hyunsung pushed for a strategy of localizing in overseas markets and a new acquisition strategy instead of resting on the domestic mold industry's growth, and the success of these strategies enabled it to leap forward into a global company with five companies, including affiliates, and 70 billion won in sales. The main reason Hyunsung Techno evolved from a small and medium-sized manufacturing company into a global business is the success of its Boa Constrictor M&A strategy. This acquisition strategy is not just another successful acquisition, but a rare, perhaps the first, domestic case of a secondary supplier successfully acquiring a primary supplier. Through the success of this strategy, Hyunsung Techno has achieved continuous business growth, an increase in sales volume, and expansion into new businesses, and these achievements are leading it toward becoming a global conglomerate. In this study, Hyunsung Techno's success strategy, which transformed a small domestic manufacturing company into a global enterprise, was analyzed in detail, with its development divided into the stages of start-up, overseas expansion, acquisitions, and business diversification. Ultimately, this case study is meant to offer strategic implications for other small and medium-sized businesses in today's gloomy economy of low or zero growth.

Optimum Sieve-slit width for Effective Removal of Immature Kernels based on Varietal Characteristics of Rice to Improve Milling Efficiency (도정효율 증진을 위한 벼 품종특성별 현미선별체 적정크기)

  • Lee, Choon-Ki;Kim, Jung-Tae;Choi, Yoon-Hee;Lee, Jae-Eun;Seo, Jong-Ho;Kim, Mi-Jung;Jeong, Eung-Gi;Kim, Chung-Kon
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.54 no.4
    • /
    • pp.357-365
    • /
    • 2009
  • With the aim of improving milling efficiency as well as head-rice percentage after milling, an experiment to improve the removal of immature kernels in the immature brown rice separator (IBRS) was performed, focusing on varietal characteristics. The ability of the IBRS to remove immature grains depended essentially on the kernel thickness of the brown rice. The kernel thickness of the tested rice varieties ranged from 1.79 mm in Nonganbyeo to 2.16 mm in Daeribbyeo 1. Although there was some variation among rice varieties, it was roughly suggested that the suitable sieve-slit widths for good separation of immature kernels were 1.9 mm for varieties thicker than 2.08 mm, 1.8 mm for varieties of 2.00-2.08 mm thickness, 1.7 mm for varieties of 1.90-2.00 mm thickness, and 1.60-1.65 mm for varieties thinner than 1.7 mm. It was found that the higher the proportion of immature kernels in the brown rice, the more conspicuous the improvement in milling efficiency and head-rice rate achieved by their removal. As the sieve-slit width increased beyond the optimum range, the loss of mature grains increased sharply. For effective separation of immature kernels, it was suggested that the optimum sieve-slit width be set according to both the kernel thickness and the critical loss limit of mature kernels.
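
The rough guideline above maps kernel thickness to a suggested sieve-slit width. The sketch below merely restates those published bands as a lookup function; the function name is ours, the 1.70-1.90 mm band is not stated in the abstract and is left unassigned, and any operational choice should follow the paper itself.

```python
def suggested_sieve_slit_mm(kernel_thickness_mm: float):
    """Suggested sieve-slit width (mm) for separating immature brown-rice
    kernels, restating the rough bands given in the abstract.  Returns None
    for the 1.70-1.90 mm band, which the abstract does not cover."""
    t = kernel_thickness_mm
    if t > 2.08:
        return 1.9
    if 2.00 <= t <= 2.08:
        return 1.8
    if 1.90 <= t < 2.00:
        return 1.7
    if t < 1.70:
        return (1.60, 1.65)   # a range is suggested for the thinnest kernels
    return None               # 1.70-1.90 mm: not stated in the abstract

# Daeribbyeo 1 (2.16 mm) falls in the thickest band; Nonganbyeo (1.79 mm)
# falls in the band the abstract leaves unstated.
print(suggested_sieve_slit_mm(2.16))   # -> 1.9
print(suggested_sieve_slit_mm(1.79))   # -> None
```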

Radiation Absorbed Dose Calculation Using Planar Images after Ho-166-CHICO Therapy (Ho-166-CHICO 치료 후 평면 영상을 이용한 방사선 흡수선량의 계산)

  • 조철우;박찬희;원재환;왕희정;김영미;박경배;이병기
    • Progress in Medical Physics
    • /
    • v.9 no.3
    • /
    • pp.155-162
    • /
    • 1998
  • Ho-166 was produced by neutron reaction in a reactor at the Korea Atomic Energy Institute (Taejon, Korea). Ho-166 emits high-energy beta particles with a maximum energy of 1.85 MeV and a small proportion of gamma rays (80 keV). Therefore, the radiation absorbed dose estimation could be based on in-vivo quantification of the activity in tumors from gamma camera images. Approximately 1 mCi of Ho-166 in solution was mixed into a flood phantom, and planar scintigraphic images were acquired with and without the patient interposed between the phantom and the scintillation camera. The transmission factor over an area of interest was calculated from the ratio of counts in selected regions of the two images described above. A dual-head gamma camera (Multispect2, Siemens, Hoffman Estates, IL, USA) equipped with medium-energy collimators was used for imaging (80 keV $\pm$ 10%). A fifty-nine-year-old female patient with hepatoma was enrolled in the therapeutic protocol after informed consent was obtained. Thirty millicuries (1110 MBq) of Ho-166-CHICO was injected into the right hepatic arterial branch supplying the hepatoma. When the injection was completed, anterior and posterior scintigraphic views of the chest and pelvic regions were obtained on 3 successive days. Regions of interest (ROIs) were drawn over the organs in both the anterior and posterior views. The activity in those ROIs was estimated from the geometric mean, the calibration factor, and the transmission factor. Absorbed dose was calculated using the Marinelli formula and the Medical Internal Radiation Dose (MIRD) schema. The tumor dose of the patient treated with 1110 MBq (30 mCi) of Ho-166 was calculated to be 179.7 Gy. The doses to normal liver, spleen, lung, and bone were 9.1%, 10.3%, 3.9%, and 5.0% of the tumor dose, respectively. In conclusion, the tumor dose and the absorbed dose to surrounding structures were calculated by daily external imaging after Ho-166 therapy for hepatoma. Absorbed dose calculation provides useful information for keeping the dose to each surrounding organ below its threshold.
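
The geometric-mean quantification mentioned in the abstract (anterior and posterior counts combined with a transmission factor and a calibration factor) follows the standard conjugate-view form. The sketch below shows that general form only; the symbols, the omitted scatter/background corrections, and all numbers are illustrative assumptions, not values from the study.

```python
import math

def transmission_factor(counts_with_patient: float,
                        counts_without_patient: float) -> float:
    """Transmission factor over a region, from flood-phantom images acquired
    with and without the patient interposed (ratio of counts)."""
    return counts_with_patient / counts_without_patient

def conjugate_view_activity(anterior_counts: float, posterior_counts: float,
                            transmission: float, calibration_cps_per_mbq: float,
                            acquisition_time_s: float) -> float:
    """Standard conjugate-view (geometric-mean) activity estimate:
        A = sqrt(I_ant * I_post / T) / C
    where I are count rates, T is the transmission factor and C the system
    calibration factor.  Illustrative only; corrections used in the paper
    (e.g. scatter, background) are omitted here."""
    i_ant = anterior_counts / acquisition_time_s
    i_post = posterior_counts / acquisition_time_s
    return math.sqrt(i_ant * i_post / transmission) / calibration_cps_per_mbq

# Hypothetical numbers, not data from the study:
T = transmission_factor(counts_with_patient=42_000, counts_without_patient=120_000)
activity_mbq = conjugate_view_activity(anterior_counts=85_000,
                                       posterior_counts=64_000,
                                       transmission=T,
                                       calibration_cps_per_mbq=5.0,
                                       acquisition_time_s=300.0)
print(f"T = {T:.2f}, estimated activity = {activity_mbq:.1f} MBq")
```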


Beak Trimming Methods - Review -

  • Glatz, P.C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.13 no.11
    • /
    • pp.1619-1637
    • /
    • 2000
  • A review was undertaken to obtain information on the range of beak-trimming methods available or under development. Beak-trimming of commercial layer replacement pullets is a common yet critical management tool that can affect performance for the life of the flock. The most obvious advantage of beak-trimming is a reduction in cannibalism, although the extent of the reduction depends on the strain, season, type of housing, flock health, and other factors. Beak-trimming also improves feed conversion by reducing food wastage. A further advantage of beak-trimming is a reduction in the chronic stress associated with dominance interactions in the flock. Beak-trimming of birds at 7-10 days is favoured by industry, but research over the last 10 years has shown that beak-trimming at day-old causes the least stress on birds, and efforts are needed to encourage industry to adopt the practice of beak-trimming birds at day-old. Proper beak-trimming can result in greatly improved layer performance, but improper beak-trimming can ruin an otherwise good flock of hens. Re-trimming is practiced in most flocks, although some flocks need only one trimming. Given the continuing welfare scrutiny of using a hot blade to cut the beak, attempts have been made to develop more welfare-friendly methods of beak-trimming. Despite developments in the design of hot-blade beak-trimmers, the process has remained largely unchanged: a red-hot blade cuts and cauterises the beak. The variables in the process are blade temperature, cauterisation time, operator ability, severity of trimming, age at trimming, strain of bird, and beak length. This method of beak-trimming is still overwhelmingly favoured in industry and there appear to be no alternative procedures that are more effective. Sharp secateurs have been used to trim the upper beak of both layers and turkeys. Bleeding from the upper mandible ceases shortly after the operation, and despite the regrowth of the beak a reduction in cannibalism has been reported. Very few differences have been noted between the behaviour and production of hot-blade and cold-blade cut chickens. This method has not been used on a large scale in industry. There are anecdotal reports of cannibalism outbreaks in birds with regrown beaks. A robotic beak-trimming machine was developed in France which permitted simultaneous, automated beak-trimming and vaccination of day-old chicks at up to 4,500 chickens per hour. Use of the machine was not successful because, if the chicks were not loaded correctly, they could drop off the line or receive excessive or very light trimming. Robotic beak-trimming was not effective if there was variation in the weight or size of chickens. Capsaicin can cause degeneration of sensory nerves in mammals and decreases the rate of beak regrowth by its action on the sensory nerves. Capsaicin is a cheap, non-toxic substance that can be readily applied at the time of less severe beak-trimming. It suffers the disadvantage of causing an extreme burning sensation in operators who come in contact with the substance during its application to the bird. Methods of applying the substance that minimise the risk of operators coming in contact with capsaicin need to be explored. A method was reported which cuts the beaks of day-old chickens with a laser beam. No details were provided on the type of laser used or the severity of beak-trimming, but by 16 weeks the beaks of laser-trimmed birds resembled untrimmed beaks, except for the missing bill tip. Feather pecking and cannibalism during the laying period were highest among the laser-trimmed hens. Transportable laser machines are now available, and research to investigate the effectiveness of beak-trimming using the ablative and coagulative lasers used in human medicine should be explored. Liquid nitrogen was used to declaw emu toes but was not effective: there was regrowth of the claws, and the time and cost involved in the procedure limit the potential of using this process to beak-trim birds.

RGB Channel Selection Technique for Efficient Image Segmentation (효율적인 이미지 분할을 위한 RGB 채널 선택 기법)

  • 김현종;박영배
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.10
    • /
    • pp.1332-1344
    • /
    • 2004
  • With the development of the information superhighway and multimedia-related technologies in recent years, more efficient technologies to transmit, store, and retrieve multimedia data are required. Among such technologies, the first issue is that semantic-based image retrieval requires separate annotation in order to give meaning to the image data, alongside low-level property information such as color, texture, and shape. Although semantic-based information retrieval has been carried out using vocabulary dictionaries of given keywords, it has not yet escaped the limits of existing keyword-based text information retrieval. The second problem is reduced retrieval performance in content-based image retrieval systems: it is difficult to separate an object from an image with a complex background, difficult to extract a region because of excessive division of regions, and difficult to separate objects from an image containing multiple objects in a complex scene. To solve these problems, this paper establishes a content-based retrieval system that proceeds in five steps. The most critical of these steps extracts, from the R, G, and B channel images, the one with the largest background and the one with the smallest background. In particular, a method is proposed that extracts the subject as well as the background by using the channel image with the largest background. Also, to solve the second problem, a method is proposed in which multiple objects are separated using the RGB channel selection technique, with excessive region division optimized by means of the Watermerge threshold value together with object separation based on RGB channel separation. Tests showed that the proposed methods were superior to existing methods in retrieval performance, to the extent that they can replace methods developed for retrieving complex objects that were previously difficult to retrieve.
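
The channel-selection idea above (pick the R, G, or B channel image whose background area is largest or smallest before separating the subject) can be sketched as follows. This is not the paper's algorithm: the Otsu thresholding, the "background = pixels below the threshold" criterion, and the function names are assumptions introduced here to illustrate the general approach; the paper's Watermerge-based region merging is not covered.

```python
import numpy as np

def otsu_threshold(channel: np.ndarray) -> int:
    """Otsu's threshold for a single uint8 channel (maximises between-class
    variance).  Used here only as a well-known way to split a channel into
    'background' and 'foreground'."""
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        w1 = total - w0
        mu0, mu1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def select_channels_by_background(rgb: np.ndarray):
    """Among the R, G, B channel images, return the indices of the channels
    with the largest and the smallest thresholded background area."""
    background_sizes = []
    for c in range(3):
        channel = rgb[..., c]
        t = otsu_threshold(channel)
        background_sizes.append(int((channel <= t).sum()))
    largest = int(np.argmax(background_sizes))
    smallest = int(np.argmin(background_sizes))
    return largest, smallest, background_sizes

# Tiny synthetic example: a bright square visible mainly in the R channel,
# a gradient in G, and a nearly uniform B channel.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[16:48, 16:48, 0] = 200
img[..., 1] = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
img[..., 2] = 10
print(select_channels_by_background(img))
```

The channel with the largest background would then be used to extract the subject and background, while further object separation and region-merge thresholding proceed as described in the paper.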