Ontario Electricity System Operational Update #6: Deteriorating Reliability

Preface: As part of this site’s series addressing some of the challenges underlying Ontario’s electricity system operations, here is a report on delivery reliability.

Data reported by the Ontario Energy Board for the period 2005-2011 indicates a declining trend in power delivery reliability for Ontario’s urban consumers. These results indicate problems with the smart meter program and the Ontario Energy Board’s approach to regulating the electricity distribution sector.

CAIDI (the Customer Average Interruption Duration Index) measures how quickly utilities restore service to consumers who get blacked out. To illustrate how the OEB's annual reporting works, a CAIDI of 0.5 indicates an average restoration time of 30 minutes, whereas a CAIDI of 1.5 indicates 90 minutes.
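For readers who want the mechanics, here is a minimal sketch of the standard calculation (CAIDI = SAIDI / SAIFI, i.e. total customer-hours of interruption divided by total customer interruptions). The outage records below are hypothetical numbers chosen only to illustrate the arithmetic.

```python
# Minimal sketch of the standard CAIDI calculation:
#   CAIDI = total customer-hours interrupted / total customer interruptions
# (equivalently, SAIDI / SAIFI). The records below are hypothetical.

# Each record: (customers interrupted, outage duration in hours)
outages = [
    (2_000, 0.5),  # 2,000 customers out for 30 minutes
    (500, 2.0),    # 500 customers out for 2 hours
]

customer_hours = sum(n * hours for n, hours in outages)  # 2,000.0
interruptions = sum(n for n, _ in outages)               # 2,500

caidi = customer_hours / interruptions
print(f"CAIDI = {caidi:.2f} hours")  # CAIDI = 0.80 hours, i.e. 48 minutes
```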

Here is the data, as reported in successive Electricity Distributor Yearbooks, for the largest 10 urban distribution utilities in Ontario. These utilities serve a little under half of all the consumers in the province. The utilities in this sample tend to be the most advanced of the province’s distributors, as indicated by factors such as a rapid implementation of smart meters.

CAIDI Results (hours)

Utility                       Customers (2005)   2005   2006   2007   2008   2009   2010   2011
Enersource Hydro Mississauga  178,140            0.567  0.62   0.83   0.45   0.53   0.50   0.47
Enwin                         84,254             0.840  0.63   0.57   0.49   0.47   0.55   0.91
Horizon                       230,327            0.658  0.65   0.64   0.83   0.65   0.74   1.28
Hydro One Brampton            116,166            0.605  0.58   0.68   0.69   0.62   0.60   0.65
Hydro Ottawa                  278,581            1.100  1.27   1.15   0.97   1.30   1.37   1.74
Kitchener-Wilmot Hydro        79,487             1.304  0.71   1.18   0.90   0.47   0.72   1.12
London Hydro                  138,046            0.695  0.59   0.69   0.96   0.56   0.84   0.78
Powerstream                   203,749            0.557  0.80   1.40   0.95   1.60   0.67   1.04
Toronto Hydro                 676,678            0.871  0.72   0.86   0.70   1.56   0.77   0.93
Veridian Connections          100,802            0.542  0.92   1.07   0.98   1.51   0.67   0.86
Arithmetic average                               0.77   0.75   0.91   0.79   0.93   0.74   0.98
Weighted average                                 0.79   0.78   0.91   0.78   1.13   0.79   1.03

So far I have only been able to collect 2012 CAIDIs for Toronto Hydro (0.93) and Enersource (0.42).

The weighted average reflects the number of customers served by each utility and is a more accurate indicator of the overall trend than the arithmetic average.
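As a check on the figures, here is a minimal sketch of both averages applied to the 2011 column, using the customer counts and CAIDI values taken directly from the table above.

```python
# Reproduce the 2011 averages from the table above.
# Each entry: (customers served as of 2005, 2011 CAIDI in hours).
data = {
    "Enersource Hydro Mississauga": (178_140, 0.47),
    "Enwin":                        (84_254, 0.91),
    "Horizon":                      (230_327, 1.28),
    "Hydro One Brampton":           (116_166, 0.65),
    "Hydro Ottawa":                 (278_581, 1.74),
    "Kitchener-Wilmot Hydro":       (79_487, 1.12),
    "London Hydro":                 (138_046, 0.78),
    "Powerstream":                  (203_749, 1.04),
    "Toronto Hydro":                (676_678, 0.93),
    "Veridian Connections":         (100_802, 0.86),
}

total_customers = sum(n for n, _ in data.values())
weighted = sum(n * caidi for n, caidi in data.values()) / total_customers
arithmetic = sum(caidi for _, caidi in data.values()) / len(data)

print(f"arithmetic average: {arithmetic:.2f}")  # 0.98
print(f"weighted average:   {weighted:.2f}")    # 1.03
```

Because Toronto Hydro alone serves roughly a third of the sampled customers, its results move the weighted average noticeably.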

A deteriorating trend in reliability for urban consumers is clearly evident in the weighted average results.

There is an appendix at the end of this presentation addressing data limitations and areas for improvement.

Year-over-year changes in CAIDI for a particular utility are strongly influenced by weather. For example, many utilities suffered harsh weather in 2007.

It is informative to consider reliability trends over longer time periods, over which utility operating practices, rather than the quality of the fixed assets serving consumers, strongly influence CAIDI.

While the absolute level of CAIDI can, to some extent, reflect exogenous factors such as customer density, trends in CAIDI results can be directly compared between utilities.

It is particularly noteworthy that customers served by Enersource have not only enjoyed the best reliability among the comparators but have also seen their service improve over the period.

In stark contrast, Hydro Ottawa’s reliability numbers have been consistently among the worst of the Big 10 distributors since 2005 and are trending in the unfavourable direction over the period. Consumers of Hydro Ottawa have grounds for serious concerns about the management of that utility. Note that the current OEB chair was President and CEO of Hydro Ottawa Group of Companies from 2005 until April 2011.

The deteriorating CAIDI trend for the overall group, notwithstanding improvements by Enersource and fairly steady, better-than-average performance by Hydro One Brampton, is a strong indicator that the smart meter program is not delivering on one of its key promises and that the OEB's regulatory approach is not focusing sufficient attention on reliability.

Since the smart meter program was officially launched in 2004, the biggest direct service benefit promised to consumers has been faster recovery from outages. The capability of smart meters to automatically signal outages to utility control centres was trumpeted by the program's official advocates.

The smart meter program has been expensive. As of September 30, 2010, the total consumer investment in one portion of the smart meter program across the province was $994,426,187. This figure reflects only the distribution utilities' capital costs of meter installation; it does not include distributor operating costs, nor other costs in the system, such as upgrades to distributor Customer Information Systems to accommodate smart meter data and costs incurred by other agencies, such as the IESO, for smart meter data management.

The official smart meter program has produced no evidence of systematic reliability improvement, but anecdotal claims of benefits by smart meter and smart grid advocates are common.

The Ontario Energy Board has been pursuing incentive regulation with most distribution utilities. The incentive program encourages utilities to cut costs, particularly operating costs, but does not set minimum service standards for consumers. Costs for outage repair services are recovered in operating cost budgets. It appears that many utilities have been responding to their regulatory incentives by allowing reliability to slide.

The recent Ontario Electricity Distribution Sector Review lauded the smart grid program at length but did not address the deteriorating trend in quantified reliability results.

Appendix: Improving Data Reporting

A relatively minor measurement issue could affect this data. Unfortunately, the Ontario Energy Board's approach to reporting CAIDI results does not require the data to be generated using a consistent methodology. The tracking systems used by utilities in Ontario for accumulating the data underpinning these measurements depend on the type of technology used. More advanced technologies can more accurately capture a wider variety of customer service interruptions. The implication of this inconsistency is that less technologically advanced utilities tend to under-report their actual CAIDI. Utilities upgrading their tracking systems in the midst of the time series would tend to report a deterioration in their results for equivalent actual performance. To put this effect in context, an increase in CAIDI on the order of 5% is sometimes seen due to measurement technology changes.
Another problem with the OEB's approach is that the reported results do not break out the impact of planned outages or loss of transmission supply to the distribution utility. Customer service is sometimes interrupted for scheduled equipment upgrades and some types of maintenance. Similarly, transmission outages show up in the raw CAIDI results presented here. In the case of Toronto Hydro over the period 2006-2011, loss of supply contributed CAIDI impacts of between 0.03 and 0.05 hours per year; netting that out of, say, Toronto Hydro's 2011 raw CAIDI of 0.93 would leave roughly 0.88 to 0.90.

5 Comments

  1. This is not a “trend”. What you have are two years (2009 and 2011) in which a number of these distributors appeared to have problems restoring services. Utilities generally set +/- 10% as the measure beyond which they start looking for problems or taking praise in relation to CAIDI.

    “CAIDI is also used in some instances by some utilities as an indicator, it must be noted that its calculation can lead to false conclusions (where SAIFI and SAIDI are both improving but unevenly leading to an increase in CAIDI). It also gives an average restoration time for an average customer and does not take into account the configuration of the distribution system or the nature of the interruptions.” (EB-2010-0249: Hydro One Networks’ Comments on – Phase 2 – Initiative to Develop Electricity Distribution Reliability Standards, p.2)
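    To make the Hydro One point concrete, here is a small illustration, with hypothetical numbers, of SAIDI and SAIFI both improving while CAIDI worsens:

    ```python
    # Hypothetical two-year comparison: SAIDI and SAIFI both improve
    # (fall), yet CAIDI (= SAIDI / SAIFI) gets worse (rises).
    years = {
        "year 1": {"saidi": 1.6, "saifi": 2.0},
        "year 2": {"saidi": 1.2, "saifi": 1.0},  # both metrics improved
    }

    for year, m in years.items():
        caidi = m["saidi"] / m["saifi"]
        print(f'{year}: SAIDI={m["saidi"]}, SAIFI={m["saifi"]}, CAIDI={caidi:.2f}')

    # year 1: SAIDI=1.6, SAIFI=2.0, CAIDI=0.80
    # year 2: SAIDI=1.2, SAIFI=1.0, CAIDI=1.20  <- worse despite both improving
    ```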

    In addition to the issues raised by Hydro One, your data does not indicate whether force majeure events are included or not and how such events are determined.

    Without knowledge about SAIFI, SAIDI, force majeure and the nature of the interruptions, it is impossible to say whether you have identified anything more than random noise.

    • Tom,

      I am not reporting any conclusions, but am making some important observations about the conclusions that you are reporting. You are claiming that your data and analysis support the rejection of the null hypothesis (i.e. that the means are equal between years). Statistically, if you want to compare means across and between different populations and different time periods (years in this case) in order to draw conclusions, then your data has to meet certain criteria. Your data do not meet the following criteria and that is why I am raising questions about your claims.

      Criterion #1. Normal distribution. I have strong reason to believe that the data for CAIDI are either skewed or multi-modal. This would introduce problems when using the “mean” or “weighted mean” to understand trends in the data. CAIDI is never going to be “zero” due to the nature of the systems, which are designed with redundancies (i.e. to accept some failure) in order to avoid catastrophic failure. So, that leaves two potential ways in which the utilities may be underperforming: a) routine clearing of faults and other routine events, and b) responding to more extreme or system-wide events. The use of weighted means is not going to tell us much about a system that has (at least) two very different kinds of failure, and that is why I asked for more information about the nature of the larger or system-wide failures that may have driven up CAIDI for only some utilities in some years. This may be contained in the document you referenced (though I could not locate it), but regardless it does not change the nature of the distribution of the data, which is not normally distributed and therefore not properly analyzed through the use of means or population-weighted means (see #2).

      Criterion #2. Homogeneity of Variance. The variances in the systems that you are comparing are heterogeneous. And this heterogeneity is not based only on the size of the system, but depends also on the nature of the system and the nature of the customers. For example, smaller systems that are primarily delivering electricity at distribution voltages would be more likely to have many more but shorter blackout periods than systems delivering more often at transmission voltages. I don’t see how the systems can be blended in the way that you have done in order to then compare means, because the variances in their performance cannot be the same. The data for the different distributors can only be analyzed in the way that you have attempted here if they are similar systems. They are not: Veridian is nothing like Toronto Hydro.

      The fact that your data do not meet these criteria means that you are much more limited in what you can say. See my next post.

  2. So, what CAN you do with the data to try to understand whether there is a problem here or not? I see two possibilities, both of which require more data than you have presented here (or that I can locate):

    1) compare the utilities against their own prior reliability performance (over a sufficient number of years) in dealing with both routine and extreme conditions, and then see if there is a trend across many of the utilities towards poorer performance (i.e. not by comparing means but by identifying whether many distributors are performing worse beyond a defensible threshold; a sketch of this check appears after this list)

    In order to do this, you would really need to separate out the CAIDI related to routine faults and the CAIDI related to more extreme events and analyze them separately (i.e. deal with the fact that the data is not normally distributed). Since CAIDI is not reported that way (at least not that I am aware of), the best we can do is try to make some specific inquiries into, for example, why certain utilities have extreme year-to-year differences (e.g. Kitchener-Wilmot), or why certain years appear to be very high for some utilities but not for others, keeping in mind the different nature of the systems.

    2) compare the Ontario system to other systems of a similar size and nature (e.g. BC, Quebec, Manitoba) to see if Ontario is underperforming as a whole

    The danger in this is that we commit the same error in comparing provincial systems that I pointed out in #1 in relation to distribution systems. System-wide comparisons may also not be very satisfying for you since you are trying to identify problems at the distribution level. In that case, the only data that is filed (that I am aware of) for other Canadian distributors is for HQ Distribution, Fortis BC, and Fortis NL.

    In order to do this, we would need to look at data from other systems and we would need to have Hydro One data since the data for the other systems include both the transmission and distribution systems. The CEA keeps records for many years back and for a (somewhat hefty) fee you can access that data. (I can send you what I have if you are interested).
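    Below is a minimal sketch of the within-utility check described in 1), applied to the CAIDI figures from the table in the post. The baseline choice (each utility's own 2005-2010 average) and the reuse of the +/-10% band mentioned in the first comment are illustrative assumptions, not a standard method:

    ```python
    # Flag distributors whose 2011 CAIDI exceeds their own 2005-2010
    # average by more than 10%. Figures from the table in the post.
    caidi = {
        "Enersource":         [0.567, 0.62, 0.83, 0.45, 0.53, 0.50, 0.47],
        "Enwin":              [0.840, 0.63, 0.57, 0.49, 0.47, 0.55, 0.91],
        "Horizon":            [0.658, 0.65, 0.64, 0.83, 0.65, 0.74, 1.28],
        "Hydro One Brampton": [0.605, 0.58, 0.68, 0.69, 0.62, 0.60, 0.65],
        "Hydro Ottawa":       [1.100, 1.27, 1.15, 0.97, 1.30, 1.37, 1.74],
        "Kitchener-Wilmot":   [1.304, 0.71, 1.18, 0.90, 0.47, 0.72, 1.12],
        "London Hydro":       [0.695, 0.59, 0.69, 0.96, 0.56, 0.84, 0.78],
        "Powerstream":        [0.557, 0.80, 1.40, 0.95, 1.60, 0.67, 1.04],
        "Toronto Hydro":      [0.871, 0.72, 0.86, 0.70, 1.56, 0.77, 0.93],
        "Veridian":           [0.542, 0.92, 1.07, 0.98, 1.51, 0.67, 0.86],
    }

    THRESHOLD = 0.10  # the +/-10% band from the first comment

    for name, series in caidi.items():
        baseline = sum(series[:-1]) / len(series[:-1])  # own 2005-2010 mean
        change = (series[-1] - baseline) / baseline     # 2011 vs. baseline
        if change > THRESHOLD:
            print(f"{name}: 2011 CAIDI {series[-1]:.2f} ({change:+.0%} vs {baseline:.2f})")

    # On these figures, Enwin, Horizon, Hydro Ottawa and Kitchener-Wilmot flag.
    ```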

    I admire your efforts to try to elevate the importance of reliability to the Ontario distribution system, but the data and your analysis of it are not adequate to support the conclusions you are making.

    Regards,
    Rick

  3. Rick, you raise some interesting points. I am feeling much less secure about the validity of my original observations than when I posted the data. I’m still working through your comments and hope to be back with a more substantive response soon.
