In this paper, we assess the potential for rehabilitation of comparative analysis under its new guise of benchmarking. After a brief description of comparative analysis, we discuss the deficiencies that surrounded its fall in reputation: neglect of economic principles, limited scope for action, failure to establish causal relations between farming practices and performance, lack of a holistic approach and failure to take account of production risk. Each of these deficiencies is diagnosed, and it is argued that they can be overcome through the careful selection of farm performance criteria and use of long-established and recent methods of efficiency and productivity analysis.
The case is put for widespread application by benchmarkers of recently developed methods of efficiency and productivity analysis. These methods have so far remained almost wholly in the province of research. If successful, their application would enable a benchmarker to examine economic efficiency and its components over many variables by using frontiers to capture the complex relationships between several inputs and several outputs. This form of analysis is useful where farm inputs are not monotonic and where both substitute and complementary relationships exist between them. Examples are provided from benchmarking case studies that show progress has been made in some but not all areas of concern. Regardless of the progress made in methodology, skilled and experienced benchmarkers familiar with the data are needed to interpret and apply results.
The practice of benchmarking has been developed as a farm management tool for detecting areas where individual producers could increase net operating profit by adopting the methods of their peers who are able to achieve better results. Use of the term, benchmarking, is a relatively recent occurrence; the early form of benchmarking was called comparative analysis (or, less commonly, account analysis).
Barnard and Nix (1979) described comparative analysis as the opposite of, and an advance on, cost accounting for decision making on the farm. (Their main criticism of cost accounting in this context was that each enterprise or, our preferred term, activity is treated as a self-contained business.) They summed up comparative analysis concisely when they said that it ‘emphasises the integrated nature of the farm business and its essence lies in calculating various “efficiency factors” or “indices” to compare with standards (average or “premium” figures) obtained from other, similar farms’ (Barnard and Nix 1979, p. 524). Although written over a quarter of a century ago, this definition adequately describes the usual benchmarking approach applied in Australian agriculture today.
Outputs and costs are usually calculated for comparative analysis on a per hectare basis, or sometimes on the basis of some other factor of production such as labour. Calculations incorporate adjustments for opening and closing values, and the addition of non-cash items of receipts and payments. Net output figures are used to account for internal transfers between activities, such as feed produced from crops that is used in livestock production (Barnard and Nix 1979, p. 527).
Around the time that Barnard and Nix (1979) and many other critics were writing, it was becoming clear that comparative analysis had some major inadequacies as a decision support tool for farmers. Five broad criticisms were particularly damaging to its reputation: neglect of economic principles; limited scope for action based on the comparisons made; failure to establish causal relations between farming practices and performance; lack of a holistic approach; and failure to take account of production risk.
We now outline these inadequacies in turn and present a case for the reincarnation of comparative analysis, in the guise of benchmarking, as a legitimate tool in agribusiness analysis today.
The most damaging criticism of comparative analysis was its neglect of economic principles. If it is assumed that the major objective of the farm is to maximise profit, comparative analysis should reflect sound economic principles of optimal resource allocation if it is to have value as a decision-making tool. Yet traditional comparative analyses have little to say on this key issue. Malcolm (2004, p. 396) lambasted research and development organisations in Australia that ‘have invested substantial funds in conducting large scale “average benchmarking” or comparative analysis studies with on-farm diagnostic and prescriptive intent’. State departments of agriculture were also the targets of his criticism, in that they ‘have invested large amounts of resources over long periods of time conducting comparative analysis for farm management’ with little payoff.
Malcolm (2004, p. 401) posited that economics is the core discipline of farm management, meaning that ‘the discipline organises the practically obtainable relevant information about a question or series of questions into a framework and form which enables an informed, reasoned, rational choice to be made between alternative actions faced by management’. Without its contribution, results from comparative analysis have little prescriptive value.
In comparative analysis, producers were typically categorised into fractiles according to a given performance measure, such as in the top quartile, bottom quartile, top half or middle 50 per cent of producers. Farmers who were above average had little to learn from any comparisons with other farmers in the sample. They had even less to learn when they were told that they lay in the band of the top 25 per cent (or an even smaller proportion) of farms. Such crude rankings provided little help in diagnosing producers’ problems and providing targeted management advice.
A comparative analysis of farms was typically focused on a series of partial performance measures. Individual producers were categorised according to their relative standing across all producers included in the sample according to each partial measure, such as yield or stocking rate. A chronic problem with this approach was the absence of a standard against which to measure the farm performance of each producer.
A more serious problem identified with partial performance measures was that they conveyed information on only one, often small, part of farm performance. A more comprehensive measure was needed to get an accurate picture of whole-farm performance. The approach that came closest to achieving this aim was to rank producers according to their overall net operating profit, which was a comprehensive measure of performance. However, by itself it did not convey information about the relative performance of each producer for benchmarking purposes. Some producers were likely to have many more resources at their disposal than other farmers, so such a comparison would show farmers who operate on a small scale to be less profitable, and hence performing less well, when that might not be the case.
This problem led analysts to scale profit according to one or more of the resources that producers use on their farm. Each measure provided a single profile of producer profitability according to the level of use of one farm input. The most popular such measure used was profit per hectare of land; another was profit per man-day of labour. Two other common measures related to return on capital: operating profit as a percentage of equity and operating profit per dry sheep-equivalent. Comparisons across producers were still invidious because producers undertook different farm activities to varying degrees.
The difficulty (some would say, with some justification, impossibility) of allocating overhead operating costs across a number of different farm activities led to a ‘short-cut’ measure being used for profit, namely gross margin. Performance measures used as a consequence of following this approach include gross margin per hectare and gross margin per man-day of labour.
There was still a major problem of partial profit measures of performance, as demonstrated in the following simple example for a single farm activity. A producer with a relatively high gross margin per hectare may be using many more non-land resources than another producer who has a relatively low gross margin per hectare. Of course, analysts could then look at the other partial gross margin measures and, indeed, may have found that the second farmer had a much higher gross margin per man-day than the first producer. They could continue this approach until gross margin rankings by producer were obtained against all possible farm inputs (even gross margin per dollar of a particular chemical used if they wished).
It then might be possible to say something about their relative performances if one producer out-performed another producer in all measures. But this is unlikely, and not very useful when measuring relative farm performance across many producers and trying to develop prescriptions for improved farm performance.
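The point can be illustrated with a hypothetical calculation (the figures below are invented purely for illustration): two farms can each dominate the other on a different partial gross margin measure, leaving their overall ranking indeterminate.

```python
# Hypothetical figures only: two farms ranked differently by two partial
# gross-margin measures.
farms = {
    "Farm A": {"gross_margin": 180_000, "hectares": 600, "man_days": 900},
    "Farm B": {"gross_margin": 150_000, "hectares": 750, "man_days": 500},
}

for name, f in farms.items():
    gm_per_ha = f["gross_margin"] / f["hectares"]
    gm_per_day = f["gross_margin"] / f["man_days"]
    print(f"{name}: GM/ha = {gm_per_ha:.0f}, GM/man-day = {gm_per_day:.0f}")

# Farm A: GM/ha = 300, GM/man-day = 200
# Farm B: GM/ha = 200, GM/man-day = 300
# Each farm 'wins' on one partial measure, so neither ranking alone can say
# which farm performs better overall.
```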
A further area of omission in comparative analysis was its neglect of the uncertain and variable environment facing farmers in Australia, and the risks they have to manage in making resource-use decisions (Malcolm 2004, pp. 412-414). As a result, variations in risk attitudes were incorrectly built into farm performance differentials.
Finally, a corollary of the limited scope for farmers to act on finding that their performance ranked below that of other farms was a failure to identify causal relations between farming practices and comparative performance. The results of a comparative analysis provided few clues for taking action to improve farm performance. The environments in which farmers operate vary considerably, as do farm structures and sizes. Identifying the causes of differences in farm performance through comparative analysis required grouping the farms into like categories in order to compare their performances. Remedial action was difficult to take successfully, and seldom accurate, because farms differed on so many criteria.
The task of re-establishing comparative analysis as an acceptable, and respectable, tool in farm management, albeit under its new guise of benchmarking, is not helped by the fact that all criticisms made of comparative analysis in the previous section could be directed at benchmarking as commonly practised today. Need it be so? We argue below that it need not, provided certain analytical practices are followed to redress the shortcomings that have been identified.
Economics must be the core discipline in any farm benchmarking process (the term we shall use exclusively from now on). It is necessary—and possible—to place it there. We now set out the requirements to achieve this aim. Traditional benchmarking practices use single input and output measures to indicate levels of farm performance. The interactive nature of farm inputs and outputs dictates that a superior approach is needed. Such an approach is possible using production frontiers that show the effects on all outputs of different combinations of inputs and thus better reflect the trade-offs and complementarities that exist in input use and combinations of production activities. Indeed, use of the term ‘frontier’ is fundamental to specifying economic efficiency in a general equilibrium framework.
To ensure the rigorous application of economic principles that Malcolm (2004) demanded, analyses need to be based on the application of microeconomic principles as enunciated in a host of microeconomic text books, such as Pindyck and Rubinfeld (2001, pp. 578-585). The obvious place to view a rigorous application of these microeconomic principles to efficiency and productivity analysis is from econometric text books such as Coelli et al. (2005), Greene (2004) and Kumbhakar and Lovell (2000).
Pindyck and Rubinfeld (2001, p. 579) define a technically efficient production process as one in which ‘the output of one good cannot be increased without reducing the output of another good’. In other words, the producer is adopting ‘best-practice’ production methods for a given production technology, and all points on the production contract curve represent technically efficient combinations of labour and capital. In addition to a sub-optimal mix of inputs, there are numerous other reasons why a producer might be technically inefficient and operating inside the production possibilities frontier.
Coelli et al. (2005, p. 5) define allocative efficiency in inputs as ‘selecting that mix of inputs (e.g., labour and capital) which produce a given quantity of output at minimum cost (given the input prices that prevail)’. The implication here is that not all positions on the production contract curve need be allocatively efficient. Further, the producer might be both allocatively and technically efficient in input use but not be allocatively efficient in terms of output prices: producing where the marginal rate of transformation between outputs, reflected by the slope of the production possibilities frontier, equals the slope of the isorevenue curve, reflecting relative output prices.
The product of technical efficiency and allocative efficiency provides a measure of economic efficiency. A producer maximises profit when he or she attains the highest level of economic efficiency possible subject to resource constraints and constraints imposed by the scale of operations on the farm. The concept of economic efficiency can be linked back to that of net farm operating profit in that a producer who maximises economic efficiency would be maximising net farm operating profit. To the extent that total gross margin is used as a proxy for profit on a particular farm, maximising economic efficiency could be said to be equivalent to maximising total gross margin. If the analysis is at the individual activity level and the activity gross margin is used as a proxy for the profit of an activity, maximising economic efficiency in the production activity is akin to maximising the activity gross margin.
It is possible to extend the efficiency analysis by also examining scale efficiency. Scale efficiency is different from scale economy in that a producer who is a price-taker exploits scale economies by attempting to produce at that level of output where long-run average cost is minimised. Scale efficiency, on the other hand, is a relative term in that it represents the lowest production cost achievable by producers in the benchmarking sample for a given output level after controlling for technical inefficiency. That is, the most scale-efficient farm in a benchmarking sample is not necessarily producing at the point of minimum long-run average cost but is the farm producing closest to that point. The measure of technical efficiency can be disaggregated into pure technical efficiency and scale efficiency.
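These relationships can be summarised compactly. The sketch below uses standard notation, with the scale-efficiency decomposition expressed in the usual way as the ratio of technical efficiency measured against constant-returns and variable-returns frontiers:

```latex
EE_i = TE_i \times AE_i, \qquad
TE_i^{\mathrm{CRS}} = TE_i^{\mathrm{VRS}} \times SE_i ,
```

where $EE_i$, $TE_i$, $AE_i$ and $SE_i$ denote the economic, technical, allocative and scale efficiency of producer $i$, and the superscripts indicate the returns-to-scale assumption under which technical efficiency is measured.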
Ideally, we would like to identify ways in which all but the ‘best-practice’ or frontier producers can alter the ways in which they manage their resources to improve their overall farm performance. To achieve this, we need to identify these farmers, and not some proportion of farmers in a particular performance band, and compare the performance of other producers in relation to these best-practice producers. Further, we would like to be able to identify ‘peers’ for particular farmers who are relatively inefficient. A peer is a farmer who operates on the production frontier but with attributes and farm structure that bear the closest resemblance to those of the inefficient farmer. This approach provides a way to broaden the proportion of farmers who can benefit from benchmarking.
The solution to overcoming the deficiencies of partial performance measures is to obtain an overall measure of farm performance. This measure should take into account all farm inputs used and farm outputs produced, and provide a consistent ranking across many producers. Then, and only then, can a benchmarker make meaningful comparisons of farm performance across many producers in a benchmarking sample. As discussed below, a set of powerful analytical tools is now available to enable such comparisons to be made.
Three performance measures meet the criteria to establish benchmarking as a suitable analytical tool for making decisions about resource use on the farm. They are technical efficiency, allocative efficiency and scale efficiency. These measures are holistic in that they can be constructed to take account of all resources used and outputs produced on the farm.
As a concept, economic efficiency has two major advantages in benchmarking. First, it provides a sound basis for a whole-farm comparison of profits (or gross margin if overhead operating costs are excluded) across farms that is independent of the level of resources available to the farmer. A beneficial feature of economic efficiency measures is that they identify the best-practice producers, and measure the performance of other producers in relation to these best-practice producers.
Second, these measures are easy to interpret in that the best possible performance is given an index of 100 per cent (or 1.0) and producers who are not at the best-practice level obtain an index between 0 per cent (implying a highly unlikely event of no output whatsoever) and 100 per cent. The distance they are below 100 per cent measures the extent to which these inefficient producers are capable of improving farm performance if they were able to reach the standard of the best-practice producers. A producer who currently has a technical efficiency index of 0.7, or 70 per cent, is producing 30 per cent less output (1.0 minus 0.7) than is attainable from the same amount of inputs on the farm, or farm activity, being benchmarked. While meeting the stipulation that performance indicators should be calculated for the whole farm, the measures outlined above also make it possible to obtain efficiency scores for individual farm activities.
Total factor productivity is a more comprehensive measure than technical and scale efficiency measures. It incorporates differences between farms in production technology, whereas methods used to estimate technical and scale efficiency assume a constant production technology across all farms in the benchmarking sample. The distinction between differences in production technology and technical efficiency is that a producer who is technically inefficient lies beneath the production frontier. An improvement in technical efficiency occurs when an inefficient farmer moves closer to the production frontier. On the other hand, adoption of an improved production technology leads to an upward movement in the production frontier. If farms do use different production technologies, it might pay to use total factor productivity as a measure of farm performance.
Two recent developments in the technical efficiency literature allow for production variability and the risk attitudes of producers to be taken into account when measuring technical inefficiency. These approaches have the advantage of purging from technical inefficiency estimates the effects of risk management decisions.
First, the risk attitudes of producers can be implicitly recognised when modelling production in order to measure the technical inefficiency of each producer. The second approach is to recognise that producers react differently to different states of nature. As Malcolm (2004, p. 413) observed,
It is overly simplistic to reduce farm decision analysis to analyses of ‘once and for all options’. Making a decision is just the first step. The next steps are to apply the decision and respond as the farming world changes … .
In particular, producers are likely to change their resource use decisions as seasonal conditions change during the year. The methodologies underlying the econometric analysis in implementing these two approaches are discussed below.
The major interpretive difficulty with the typically blunt benchmarking measures currently used for Australian farms is in distilling factors under the control of farmers from those outside their control. There are some obvious factors that farmers can do little to alter, such as input prices, output prices and climate. Any benchmarking endeavour needs to control for these environmental influences on farm performance, especially rainfall. Methods of efficiency analysis, described below, can cater for this environmental diversity.
Failure to identify causal relationships between performance and production factors led analysts recently to propose a set of ‘profit drivers’ that could be used as a set of explanatory variables on which to regress measures of farm performance using ordinary least squares regression analysis. But many of the so-called ‘profit drivers’ are simply alternative partial measures of performance. They are more accurately termed indicators of farm performance or symptoms of a farm problem than variables explaining performance.
Further, the use of ordinary least squares regression requires a highly restrictive assumption that seldom stands scrutiny in agricultural production systems: each causal factor is assumed to operate independently of other factors. Take the example of using stocking rate as a ‘profit driver’ in pastoral industries such as wool and lamb production without considering its interactions with other factors influencing production. Consider the simple case of producers with a relatively low performance measure who have a low stocking rate and poor pasture and grazing management. They are unlikely to raise performance in the long run simply by adding more sheep to the flock that they run.
There are two objective approaches that are potentially superior to ordinary least squares analysis for measuring the effects of production factors on farm performance, and one potentially valuable subjective approach. The most common objective approach is to embed an analysis of causal relationships between technical efficiency and production factors within the model for estimating technical efficiency scores. There is now a vast literature reporting the results of such studies that model factors causing variations in technical inefficiencies between farmers simultaneously with estimating efficiency scores. While it is not possible to carry out the same sort of one-step procedure with estimates of allocative efficiency, a valuable exercise is to explore how allocative inefficiency could be reduced by identifying which inputs are over-used and which are under-used, given input prices, and the extent to which each input is either over-used or under-used for maximising profit.
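A sketch of the general form of such a one-step model is given below; the notation is generic rather than that of any particular study cited here, with the frontier parameters and the inefficiency-effects parameters estimated jointly by maximum likelihood:

```latex
\ln y_i = \mathbf{x}_i'\boldsymbol{\beta} + v_i - u_i, \qquad
v_i \sim N(0, \sigma_v^2), \qquad
u_i \sim N^{+}\!\left(\mathbf{z}_i'\boldsymbol{\delta}, \sigma_u^2\right),
```

where $y_i$ is output, $\mathbf{x}_i$ the vector of inputs, $\mathbf{z}_i$ the farm- and farmer-related variables hypothesised to influence inefficiency, and $\boldsymbol{\delta}$ the parameters indicating which of those variables raise or lower technical inefficiency.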
The second objective approach is to identify elements within the production process that influence farm performance and study the relations between these elements. This approach avoids the fallacy mentioned above of the producer trying to improve farm performance by increasing stocking rate, because it would enable the analyst first to establish the relations between poor pasture production and grazing management on the one hand and low stocking rate on the other. It could be used directly to examine links between profit (economic efficiency) and production factors, and between these factors, but a preferable method is first to decompose economic efficiency into its technical and allocative efficiency components and conduct principal components analysis on each efficiency measure.
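A minimal sketch of this second approach is shown below, applying principal components analysis to a set of hypothetical production factors and relating the resulting components to previously estimated efficiency scores; all variable names and data are invented for illustration.

```python
# Sketch only: PCA on candidate production factors, then relate the
# component scores to an efficiency measure estimated separately.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical farm-level data: production factors plus an efficiency score
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "stocking_rate": rng.normal(10, 2, 60),
    "pasture_growth": rng.normal(6, 1, 60),
    "grazing_days": rng.normal(250, 30, 60),
    "fertiliser_kg_ha": rng.normal(120, 25, 60),
})
technical_efficiency = rng.uniform(0.6, 1.0, 60)

X = StandardScaler().fit_transform(df)          # PCA needs standardised inputs
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)

print("Variance explained:", pca.explained_variance_ratio_.round(2))
print("Loadings:\n", pd.DataFrame(pca.components_, columns=df.columns).round(2))

# The component scores can then be regressed on (or correlated with) the
# efficiency estimates to see which bundles of related factors matter.
corr = np.corrcoef(scores[:, 0], technical_efficiency)[0, 1]
print(f"Correlation of first component with TE: {corr:.2f}")
```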
The subjective approach is to rely on the expertise of the benchmarkers and their intimate knowledge of the circumstances and capabilities of each farm operator. Armed with information about discrepancies in technical performance between farms, these people are able to discern differences in management between farms that cause these discrepancies and assemble a set of remedies for the less efficient farmer.
In addition to analysing factors influencing whole-farm performance measures, it is desirable to study factors influencing performance in specific activities. An ability to ‘drill down’ from an analysis of whole-farm performance to an individual activity performance is crucial for identifying factors causing inter-farm variations in overall farm performance. The same set of individual performance measures can be calculated at this disaggregated level and used to undertake significance tests on the influences of selected farm- and farmer-related variables on these measures. Factors causing these variations are likely to differ between technical, allocative and scale efficiency.
Analysts of technical and allocative efficiency have employed a variety of mathematical programming, index number and econometric methods to measure technical, allocative and scale efficiency, and total factor productivity.
The mathematical programming approach is called data envelopment analysis (DEA). It can also handle multiple inputs and multiple outputs. As defined by Coelli et al. (2005, p. 162):
DEA involves the use of linear programming methods to construct a non-parametric piece-wise surface (or frontier) over the data. Efficiency measures are then calculated relative to this surface.
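To make the mechanics concrete, the following is a minimal sketch of an input-oriented, constant-returns DEA model solved farm by farm with a linear programming routine; the data are hypothetical, and a full DEA application would also report slacks, variable-returns results and peer weights.

```python
# Sketch of input-oriented, constant-returns DEA using linear programming.
import numpy as np
from scipy.optimize import linprog

X = np.array([[100.0, 20.0],    # inputs per farm (e.g. hectares, labour units)
              [120.0, 30.0],
              [ 80.0, 25.0]])
Y = np.array([[300.0],          # outputs per farm (e.g. imputed wool output)
              [330.0],
              [280.0]])

def dea_input_te(X, Y, o):
    """Technical efficiency of farm o relative to the sample frontier."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                               # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                       # sum_j lam_j * x_ji <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                       # sum_j lam_j * y_jr >= y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(X.shape[0]):
    print(f"Farm {o + 1}: TE = {dea_input_te(X, Y, o):.3f}")
```

Adding the convexity constraint that the lambda weights sum to one gives the variable-returns frontier, and the ratio of the two scores yields the scale efficiency measure discussed earlier.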
Fraser and Hone (2001) used DEA to calculate Malmquist indices of TFP change for wool production in the Farm Monitor Project (FMP) benchmarking group in south-west Victoria. Fried et al. (1999, 2002) proposed a technique that allows environmental differences and statistical noise to be incorporated in an evaluation of producer performance based on a DEA framework. All producers are placed into a common operating environment and a common state of nature, which enables pure managerial inefficiency to be estimated. Henderson and Kingwell (2005) applied this method to account for rainfall when measuring technical efficiency on broadacre farms in south-western Australia.
Coelli et al. (2005, p. 86) defined an index number as ‘a real number that measures changes in a set of related variables’. Of specific relevance to this paper, index numbers are commonly used to measure changes in total factor productivity. Principal components analysis can then be used to explain variations in productivity between farms. ABARE (2004) reported on various estimates of TFP change in the sheep and other agricultural industries in Australia using Tornqvist indices.
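A minimal sketch of a Tornqvist TFP comparison between two periods for a single farm is shown below; the quantities and shares are hypothetical, and a full application would chain the indices across all years and farms.

```python
# Sketch of a Tornqvist TFP index between two periods for one farm.
import numpy as np

def tornqvist_tfp(q0, q1, r0, r1, x0, x1, s0, s1):
    """TFP_1 / TFP_0: Tornqvist output index divided by Tornqvist input index."""
    d_output = np.sum(0.5 * (r0 + r1) * np.log(q1 / q0))   # revenue-share weights
    d_input = np.sum(0.5 * (s0 + s1) * np.log(x1 / x0))    # cost-share weights
    return np.exp(d_output - d_input)

# Outputs: wool, lamb; inputs: land, labour, purchased inputs (all hypothetical)
q0, q1 = np.array([100.0, 50.0]), np.array([105.0, 60.0])
r0, r1 = np.array([0.70, 0.30]), np.array([0.65, 0.35])
x0, x1 = np.array([600.0, 900.0, 40.0]), np.array([600.0, 880.0, 45.0])
s0, s1 = np.array([0.40, 0.35, 0.25]), np.array([0.40, 0.34, 0.26])

print(f"TFP change: {tornqvist_tfp(q0, q1, r0, r1, x0, x1, s0, s1):.3f}")
```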
The application of econometric analysis, in the form of stochastic frontier production analysis, is the preferred method because it takes into account the stochastic nature of sheep production. It involves estimating a stochastic frontier production function in which a symmetric random error is added to a non-negative random variable representing inefficiency, enabling measurement of the ratio of a farm's output to the potential output defined by the frontier function for the set of farm inputs used (Coelli et al. 2005, pp. 242-244). Fleming et al. (2005) employed this approach in their analysis of benchmarked farms that are used as case studies below.
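As an illustration of the idea (not the estimation procedure used in the studies cited here), the following sketch fits a Cobb-Douglas stochastic frontier with a half-normal inefficiency term to simulated data by maximum likelihood and recovers farm-level technical efficiency scores via the standard conditional-expectation formula; in practice, purpose-built frontier software would normally be used.

```python
# Sketch: half-normal stochastic production frontier by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 200
lnx = rng.normal(size=(n, 2))                  # two (log) inputs
u = np.abs(rng.normal(0, 0.3, n))              # half-normal inefficiency
v = rng.normal(0, 0.2, n)                      # symmetric noise
lny = 1.0 + 0.6 * lnx[:, 0] + 0.3 * lnx[:, 1] + v - u
Z = np.c_[np.ones(n), lnx]                     # regressors incl. intercept

def neg_loglik(p):
    beta, ln_su, ln_sv = p[:-2], p[-2], p[-1]
    su, sv = np.exp(ln_su), np.exp(ln_sv)
    sigma = np.sqrt(su**2 + sv**2)
    lam = su / sv
    eps = lny - Z @ beta                       # composed residual v - u
    ll = (np.log(2) - np.log(sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -np.sum(ll)

start = np.r_[np.linalg.lstsq(Z, lny, rcond=None)[0], np.log(0.2), np.log(0.2)]
res = minimize(neg_loglik, start, method="BFGS")
beta, su, sv = res.x[:-2], np.exp(res.x[-2]), np.exp(res.x[-1])

# Technical efficiency: TE_i = exp(-E[u_i | eps_i])
eps = lny - Z @ beta
s2 = su**2 + sv**2
mu_star = -eps * su**2 / s2
s_star = su * sv / np.sqrt(s2)
z = mu_star / s_star
e_u = mu_star + s_star * norm.pdf(z) / norm.cdf(z)
te = np.exp(-e_u)
print(f"Mean TE = {te.mean():.3f}")
```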
Stochastic frontier production analysis is limited in its application when producers undertake a number of different activities because it can only accommodate a single aggregated output. In these situations, stochastic distance functions should be estimated as they can handle multiple inputs and multiple outputs. This method has the additional advantage that it enables the analyst to identify the nature of the production possibilities frontier between pairs of outputs.
As mentioned above, a problem arises in accounting for managerial inefficiency when production conditions or production technologies vary between farms in the benchmarking sample. This situation can be handled in stochastic frontier production analysis by applying the meta-frontier method devised by Battese, Rao and O'Donnell (2004). This method allows a number of different stochastic production frontiers to be estimated beneath one meta-production function. Technical efficiency estimates are made according to the production frontier relevant to a particular producer, and gaps in production potential between regions with different environmental conditions can be estimated.
The problem of accounting for non-stochastic environmental variables and production risk can be addressed using a stochastic production frontier with a heteroskedastic error structure. Risk plays a vital role in input allocations and therefore in output supply. A simple way to account for risk is to append another variable to the frontier model to represent the combined effects of any variables that are unobserved at the time input decisions are made. Empirical applications include Battese, Rambaldi and Wan (1997) and Villano and Fleming (2005), but none has yet been undertaken in Australian agriculture.
The stochastic frontier model can be further generalised to accommodate the risk preferences of individual decision makers without assuming a direct utility function, using the method devised by Kumbhakar (2002). A more advanced and complex methodology that takes into account different states of nature is the state-contingent production frontier approach proposed by O'Donnell and Griffiths (2006). Again, no empirical analysis using this method has yet been undertaken in Australian agriculture.
Each of the three modelling methods of stochastic frontier production analysis, data envelopment analysis and index numbers has its advantages depending on the objective of the analysis. For example, data envelopment analysis enables the analyst to identify peers for inefficient farmers, which is most useful for determining courses of action for farmers to improve their performance. On the other hand, it is a deterministic approach, and analysts applying stochastic frontier production analysis are better able to handle the stochastic nature of agricultural production. Index numbers are particularly useful for estimating changes in total factor productivity. Coelli et al. (2005) provide a detailed critique of the different methods.
Four case studies were used to assess the practicalities of using advanced benchmarking methods to explain differences in performance among sheep producers. The FMP benchmarking group is operated by DPI Victoria, which has been benchmarking farms in south-west Victoria for a long period. A second benchmarking group is operated by JRL Hall and Co., a long-established farm consultancy servicing farms in south-west Western Australia. The third group, Holmes Sackett and Associates, is also a well-established farm consultancy with clients throughout New South Wales, south-west Victoria and Tasmania, and with a small number of clients in Queensland. Finally, a smaller set of observations over a shorter time period was obtained from the Mackinnon group based at the University of Melbourne, covering farms in the main sheep-producing regions of Victoria.
The samples used in the analyses were clearly biased in two related ways. First, the farmers are self-selecting rather than being randomly selected in that they choose to belong to a benchmarking group and have their farm performance recorded and compared against other producers in the group. Second, in three of the four groups, producers receive technical and financial advice to improve their farm performance, a service taken up by only a small proportion of sheep producers in Australia. This advice probably enables them to improve their performance at a higher rate than sheep producers in general in their region and throughout Australia. However, this sample bias is only a problem for average estimates, and not for estimates for farms on the production frontier. Furthermore, performance estimates for farmers in the benchmarking group can be usefully compared with those for all sheep producers to assess the overall potential for improvement.
There are five beneficial features of the estimates obtained by using data from the benchmarking groups, which became clear as the analytical work progressed and which outweigh any shortcomings caused by the sample bias. Two of the most important relate to data quality. First, farmers in most cases pay to belong to the group and have a vested interest in the collection and use of accurate data. Second, inaccuracy of production data is a chronic problem where farmers are asked to fill out questionnaires that are checked by people who are not familiar with farm operations. Benchmarkers who provide consultancy advice, on the other hand, understand the complexities of sheep production systems and have an intimate knowledge of the operations on each farm, enabling them to vet the data for errors. Their knowledge and observational powers provide three further advantages. They are able to give feedback to modellers to improve estimation procedures in efficiency and productivity analyses. Their skills enable them to interpret model results better than analysts without this knowledge base, providing explanations for particular productivity and efficiency estimates. Finally, these abilities also enable benchmarkers to make good use of the traditional benchmark indicators, overcoming to a considerable extent the criticisms of these measures made above.
The data used in the analyses were confined to the past decade. Assembling the required data sets for estimation from the existing data sets kept by the benchmarkers was straightforward, provided the data were stored in easily accessed spreadsheets and a staff member was capable of using pivot tables. One proviso is that costs and revenues must be deflated to derive the imputed outputs and inputs, as was done in the case study analyses. For confidentiality reasons, no farm-specific data are reported here.
Estimates of technical efficiency indices and TFP estimates for specialist wool producers were obtained successfully for all groups. Variations in climatic conditions were taken into account by using dummy variables to represent six different types of seasonal conditions, from ‘excellent’ to ‘very bad’. Apart from the farm-level estimates for individual farms in each year, it proved possible to estimate group-wide changes in TFP over time. The only difficulty in the latter respect occurred when there were major changes in the benchmarked farms between years, which made mean trend estimates dubious.
Interestingly, substantial technical progress took place in two of the groups advised by consultants while no significant technical progress was observed for a third group in which farmers did not receive management advice. In the former two groups, technically inefficient farmers fell further behind the frontier although in general they achieved reasonable rates of TFP growth. In the group for which no technical progress was observed, inefficient farmers got closer to the production frontier over the period and achieved rates of TFP growth commensurate with average producers in the other two groups.
An example of the usefulness of this form of benchmarking analysis can be gauged from the results for two individual farms shown in Figure 1. (For confidentiality reasons, the details of farms and benchmarking groups are not revealed.) The distributions are in intervals of five percentage points, with farms in the extreme right-hand interval recording technical efficiency indices from 0.95 to 1.00 (95 per cent to 100 per cent). The rightward shifts of the distributions reflect the estimated technical progress occurring during the decade. One farm, represented by a star, remains close to perfect technical efficiency, although its proximity varies from year to year. In general, it performs within 5 per cent of best practice, which means that this farm achieves a high rate of TFP growth given the quite high rate of technical progress taking place in the benchmarking group. Any variation from one year to the next, such as the deterioration in performance in the second year, can be identified and explored by the benchmarker.
In contrast to the first farm, the second farm denoted with a hexagon in Figure 1 has a lower technical efficiency index of about 0.92, close to the mean for the group, in the initial year. It falls behind best-practice farms over time and finishes with an index of around 0.77. But it can be seen that this farm has not gone backwards in terms of productivity: its position in the final year of the study period is to the right of its position in the first year, indicating a modest increase in TFP. Again, a benchmarker would be able to identify any unusual inter-year shift in technical inefficiency. Benchmarkers would be able to use their intimate knowledge of the farm to seek reasons why such a shift has occurred and provide necessary advice on remedial action.
Multi-input multi-output analysis was successfully undertaken using the stochastic input distance function for FMP sheep farms. Lamb output was specified in addition to wool output, allowing an estimate of complementarities (scope economies in economic parlance) that were found to be strongly present. For farms running both activities, results indicate that there is a need to take into account all activities on mixed farms as the trends in technical efficiency and TFP differed from those obtained for these farms when examining only wool output.
This finding was strengthened by the whole-farm estimation of technical efficiency and TFP growth on farms in the Holmes Sackett and Associates benchmarking group in the various sheep-producing regions of New South Wales. A similar TFP trend was recorded as for specialist wool production. An interesting finding is the presence of significant complementarities in production between sheep and crop production, sheep and beef production, and beef and crop production.
Estimation of allocative efficiency in outputs proved to be a feasible option yielding useful results. The major problem that a benchmarker would face in providing advice on the optimal combination of wool and lamb is the volatility in the wool-lamb price ratio. A sensible solution suggested by one of the benchmarkers is to provide recommendations for a range of relative prices in which there is a reasonable degree of confidence that the true relative price would fall. Given that producers can quite easily vary the proportion in which they undertake each activity, this solution would provide a good indication of where producers should operate. The range could be updated in light of new information about likely trends in relative prices.
A slight problem with this approach is that it can be difficult to distil changes in allocative efficiency from changes in productivity and technical efficiency. Consider the recent trend towards finer wool that is evident from the benchmark data. In any given year, implicit wool output takes into account quality differences such as fibre diameter, vegetable matter and staple length that are reflected in price. But it also means that changes in wool prices for different micron categories from one year to the next due to exogenous factors in the wool market will show up as changes in implicit output. A downward trend in the fine wool premium has occurred in recent years, partly in response to genetic advances and partly because of widespread drought conditions. A consequence of this trend has been that the relative value of wool output from fine wool production to broad wool production has fallen, wool yield and other quality factors remaining unchanged. This fall should probably be viewed as a change in allocative efficiency rather than a change in productivity or technical efficiency.
No success was achieved in estimating allocative efficiency in inputs. The reason was not methodological but data deficiencies. In particular, severe difficulties were encountered in pricing three key farm inputs—land, labour and fixed assets—at their true opportunity costs. This problem is not unique to the sheep industry but is common throughout Australian agriculture.
It proved possible to test for the presence of scale efficiencies by running DEA models for a sub-set of farmers who were regularly benchmarked in one of the benchmarking groups. No significant scale efficiencies or inefficiencies were found to exist. This result could have been due to the fact that neither very large nor very small farms were included in the sub-sample. Similarly, summing estimated partial output elasticities in the production function suggested no strong scale economies.
Rutley (2006) applied principal components analysis in this way to technical efficiency for the benchmarked farms, with mixed results. He was successful in applying the approach and identifying a number of principal components, but it proved difficult for the benchmarkers to draw any clear message from the groupings of factors influencing technical efficiency. Furthermore, Rutley found that many factors each contributed only a small amount to explaining differences in technical efficiency between farms. The implication of this finding is that there appears to be no single change, or small set of key changes, on which benchmarkers could focus to lift the efficiency of inefficient farms. This finding is perhaps unsurprising given the vast number of actions that a wool producer has to get right in order to achieve best practice (Scott 2004).
Use of the one-step regression approach could prove useful. For example, it showed up an apparently anomalous situation with regard to the relationship between stocking rate and technical efficiency in one benchmarking group.
We tested the proposition that gross margin is an adequate proxy for technical efficiency within a year and for total factor productivity across years, despite the various shortcomings of gross margins noted above. For comparison with the TFP estimates, the output values and input costs comprising the gross margins were deflated by their relevant price indices between years. Results were mixed across the benchmarking groups. A high correlation was found for one homogeneous group of farmers who were facing similar environmental conditions and had received similar production advice for some years. Correlations were moderate for another group and low for a third group, in which farmers were less homogeneous.
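The mechanics of this test are simple; the sketch below deflates the components of gross margin to constant prices and correlates the result with TFP indices across farm-years, using invented figures and index names purely for illustration.

```python
# Sketch: deflate gross margin components and correlate with TFP indices.
import pandas as pd

data = pd.DataFrame({
    "farm": ["A", "A", "B", "B"],
    "year": [2003, 2004, 2003, 2004],
    "output_value": [250_000, 270_000, 180_000, 200_000],
    "input_cost": [150_000, 160_000, 110_000, 118_000],
    "output_price_index": [1.00, 1.06, 1.00, 1.06],   # base year = 1.00
    "input_price_index": [1.00, 1.04, 1.00, 1.04],
    "tfp_index": [1.00, 1.03, 1.00, 1.05],
})

data["real_gross_margin"] = (data["output_value"] / data["output_price_index"]
                             - data["input_cost"] / data["input_price_index"])
corr = data["real_gross_margin"].corr(data["tfp_index"])
print(f"Correlation between deflated gross margin and TFP: {corr:.2f}")
```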
Taking risk attitudes of farmers into account when deriving estimates of technical and allocative efficiency requires the application of sophisticated analytical methods. Few empirical studies have been undertaken globally to date, and no known studies have been made for agricultural production in Australia. Given the risky environment in which most farmers operate, this is an area urgently in need of empirical analysis.
The potential for using comparative analysis under its new guise of benchmarking is the topic of this paper. We describe the traditional methods of comparative analysis and discuss the deficiencies that surrounded the fall in its reputation. These deficiencies are summarised as the neglect of economic principles, limited scope for action based on comparisons made, a failure to establish causal relations between farming practices and performance, lack of a holistic approach and failure to take account of production risk.
Each of these deficiencies is diagnosed and we suggest some remedies by carefully selecting farm performance criteria and using long-established and recent methods of efficiency and productivity analysis. Efficiency and productivity analysis enables a benchmarker to examine economic efficiency and its components, technical efficiency and allocative efficiency, over a number of variables by using frontiers to capture the relationships between several inputs and several outputs. This form of analysis is needed where the effects of farm inputs are not necessarily monotonic and where both substitute and complementary relationships exist between them.
We report on four recent case studies entailing the application of advanced efficiency and productivity methods that enable benchmarkers to examine economic efficiency over many variables by using frontiers to capture the relationships between several inputs and several outputs. Attention is drawn to successes achieved, some limitations and areas where analyses are still needed. We also emphasise the continued need for skilled and experienced benchmarkers to check the data for accuracy, and to interpret and apply results.
ABARE 2004, ‘Australian sheep industry productivity’, Australian Lamb 04.2, 1-6.
Barnard, C.S. and Nix, J.S. 1979, Farm Planning and Control, 2nd edition, Cambridge University Press, Cambridge.
Battese, G.E., Rambaldi, A.N. and Wan, G.H. 1997, ‘A stochastic frontier production function with flexible risk properties’, Journal of Productivity Analysis 8, 269-280.
Battese, G.E., Prasada Rao, D.S. and O’Donnell, C.J. 2004, ‘A metafrontier production function for estimation of technical efficiencies and technology gaps for firms operating under different technologies’, Journal of Productivity Analysis 21, 91-103.
Coelli, T.J., Rao, D.S.P., O’Donnell, C.J. and Battese, G.E. 2005, An Introduction to Efficiency and Productivity Analysis, 2nd edition, Kluwer, Boston.
Fleming, E., Villano, R., Farrell, T. and Fleming, P. 2005, ‘Efficiency and productivity analysis’, Volume 1 in E. Fleming, R. Villano, T. Farrell, P. Fleming and D. Rutley, Analysis of Factors Influencing the Technical Efficiency of Wool Production in Selected Benchmarking Groups, Report to the Australian Sheep Industry CRC, Armidale.
Fraser, I. and Hone, P. 2001, ‘Farm-level efficiency and productivity measurement using panel data: wool production in south-west Victoria’, Australian Journal of Agricultural and Resource Economics 45, 215-232.
Fried, H.O., Schmidt, S.S. and Yaisawarng, S. 1999, ‘Incorporating the operating environment into a nonparametric measure of technical efficiency’, Journal of Productivity Analysis 12, 249-267.
Fried, H.O., Lovell, C.A.K., Schmidt, S.S. and Yaisawarng, S. 2002, ‘Accounting for environmental effects and statistical noise in data envelopment analysis’, Journal of Productivity Analysis 17, 157-174.
Greene, W. 2004, Econometric Analysis, 5th edition, Prentice Hall, Upper Saddle River, NJ.
Henderson, B. and Kingwell, R. 2005, ‘Rainfall and farm efficiency measurement for broadacre agriculture in south-western Australia’, Australasian Agribusiness Review 13, Paper 14.
Kumbhakar, S.C. and Lovell, C.A.K. 2000, Stochastic Frontier Analysis, Cambridge University Press, Cambridge.
Kumbhakar, S.C. 2002, ‘Specification and estimation of production risk, risk preferences and technical efficiency’, American Journal of Agricultural Economics 84, 8-22.
Malcolm, B. 2004, ‘Where’s the economics?’, Australian Journal of Agricultural and Resource Economics 48(3), 395-417.
O’Donnell, C. and Griffiths, W. 2006, ‘Estimating state-contingent production frontiers’, American Journal of Agricultural Economics 88(1), 249-266.
Pindyck, R.S. and Rubinfeld, D.L. 2001, Microeconomics, 5th edition, Prentice Hall, Upper Saddle River, NJ.
Rutley, D. 2005, ‘Principal components analysis of benchmarked farms’, Volume 2 in E. Fleming, R. Villano, T. Farrell, P. Fleming and D. Rutley, Analysis of Factors Influencing the Technical Efficiency of Wool Production in Selected Benchmarking Groups, Report to the Australian Sheep Industry CRC, Armidale.
Scott, J. 2004, ‘Sustainable farming in the 21st century: implications for agriculture and law’, Paper presented at an AgLaw Roundtable, University of New England, Armidale, 5 November.
Villano, R. and Fleming, E. 2005, ‘Analysis of technical efficiency in a rainfed lowland rice environment in Central Luzon Philippines using a stochastic frontier production function with a heteroskedastic error structure’, Asian Economic Journal 20(1), 29-46.