The stats of IP Regulation – Part 2: Monitoring

As promised, here are my thoughts on the RPBs’ 2017 monitoring activities, as reported by the Insolvency Service:

  • The InsS goes quiet on RPBs’ individual performances
  • Two RPBs appear to have drifted away from 3-yearly visits
  • The RPBs diverge in their use of different monitoring tools
  • On average, ICAEW visits were over three times more likely to result in a negative outcome than IPA visits
  • On average, every fourth visit resulted in one negative outcome
  • But averages can be deceptive…

As a reminder, the Insolvency Service’s report on 2017 monitoring can be found at: https://tinyurl.com/ycndjuxz

 

The picture becomes cloudy

As can be seen on the Insolvency Service’s dedicated RPB-monitoring web-page – https://www.gov.uk/government/collections/monitoring-activity-reports-of-insolvency-practitioner-authorising-bodies – their efforts to systematically review each RPB’s regulatory activities seem to have ground to a halt a year ago.  The Service did report last year that their “future monitoring schedule” would be “determined by risk assessment and desktop monitoring” and they gave the impression that their focus would shift from on-site visits to “themed reviews”.  Although their annual report indicates that such reviews have not always been confined to the desktop, their comments are much more generic, with no explanation of how specific RPBs are performing – a step backwards, I think.

 

Themed review on fees

An example of this opacity is the Service’s account of their themed review “into the activities, and effectiveness, of the regulatory regime in monitoring fees charged by IPs”.

After gathering and reviewing information from the RPBs, the InsS reports: “RPBs responses indicate that they have provided guidance to members on fee matters and that through their regulatory monitoring; fee-related misconduct has been identified and reported for further consideration”.

For this project, the InsS also gathered information from the Complaints Gateway and has reported: “Initial findings indicate that fee related matters are being reported to the IP Complaints Gateway and, where appropriate, being referred to the RPBs”.

Ohhhkay, so that describes the “activities” of the regulatory regime (tell us something we don’t know!), but how exactly does the Service expect to review their effectiveness?  The report states that their work is ongoing.

Don’t get me wrong, it’s not that I necessarily want the Service to dig deeper.  For example, if the Service’s view is that successful regulation of pre-packs is achieved by scrutinising SIP16 Statements for technical compliance with the minutiae of the disclosure checklist, I dread to think how they envisage tackling any abusive fee-charging.  It’s just that, if the Service thinks that they are really getting under the skin of issues, personally I hope they are doing far more behind the scenes… especially as the Service is surely beginning to gather threads on the question of whether the world would be a better place with a single regulator.

So let’s look at the stats…

 

How frequently are you receiving monitoring visits?

There is a general feeling that every IP will receive a monitoring visit every three years.  But is this the reality?

[Chart 1711: monitoring visits carried out each year, by RPB]

This shows quite a variation, doesn’t it?  For two years in a row, significantly less than one third of all IPs were visited in the year.  Does this mean the RPBs have been slipping from the Principles for Monitoring’s 3-year norm?

The spiky CAI line in particular demonstrates how an RPB’s visiting cycle can make the number of visits per year fluctuate wildly; nevertheless, the CAI’s regular 3-yearly peaks and troughs suggest that, in general, that RPB is following a 3-yearly schedule.  So what picture do we see if we iron out the annual fluctuations?

[Chart 1711a: proportion of IPs receiving a visit over a rolling three-year period, by RPB]

This looks more reasonable, doesn’t it?  As we would expect, most RPBs are visiting not-far-off 100% of their IPs over three years… with the clear exceptions of CAI, which seems to be oddly enthusiastic, and the ICAEW, which seems to be consistently ploughing its own furrow.  This may be the result of the ICAEW’s style of monitoring large firms with many IPs, where some IPs are visited each year but not every IP necessarily receives a visit within three years.  Alternatively, could it mean they are following a risk-based monitoring programme..?
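To make the smoothing concrete, here is a minimal sketch (with made-up visit counts – not the Insolvency Service’s figures) of how a rolling three-year window turns spiky annual visit numbers into a coverage percentage:

```python
# Illustrative sketch only: the visit counts below are invented, not the
# Insolvency Service's figures.  It shows how a rolling three-year window
# smooths an RPB's spiky annual visit numbers into a coverage percentage.

def rolling_three_year_coverage(visits_per_year, licensed_ips):
    """Sum each three-year window of visits and express it as a
    percentage of the RPB's licensed IPs."""
    coverage = []
    for i in range(2, len(visits_per_year)):
        window = sum(visits_per_year[i - 2:i + 1])
        coverage.append(round(100 * window / licensed_ips, 1))
    return coverage

# A spiky, CAI-style pattern: a big visit year every third year.
visits = [40, 5, 8, 45, 6, 7, 42]
print(rolling_three_year_coverage(visits, licensed_ips=50))
# → [106.0, 116.0, 118.0, 116.0, 110.0]
```

Note that a spiky cycle can even push the rolling figure above 100% (some IPs visited more than once in three years), which is consistent with an “oddly enthusiastic” line on the chart.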

There are benefits to routine, regular and relatively frequent monitoring visits for everyone, almost irrespective of the firm’s risk profile: it reduces the risk that a serious error may be repeated unwittingly (or even deliberately).  However, this model isn’t an indicator of Better Regulation (see, for example, the Regulators’ Compliance Code at https://www.gov.uk/government/publications/regulators-compliance-code-for-insolvency-practitioners).  With the InsS revisiting their MoU (and presumably also the Principles for Monitoring) with the RPBs, I wonder if we will see a change.

 

Focussing on the Low-Achievers?

The alternative to the one-visit-every-three-years-irrespective-of-your-risk-profile model is to take a more risk-based approach, to spend one’s monitoring efforts on those that appear to be the highest risk.  This makes sense to me: if a firm/IP has proven that they are more than capable of self-regulation – they keep up with legislative changes, keep informed even of the non-legislative twists and turns, and don’t leave it solely to the RPBs to examine whether their systems and processes are working, but they take steps quickly to resolve issues on specific cases and across entire portfolios and systems – why should licence fees be spent on 3-yearly RPB monitoring visits, which pick up non-material non-compliances at best?  Should not more effort go towards monitoring those who seem consistently and materially to fail to meet required standards or to adapt to new ones?

But perhaps that’s what’s being done already.  Are many targeted visits being carried out?

[Chart 1712: targeted visits carried out each year, by RPB]

It seems that for several years few targeted visits have been conducted, although perhaps the tide is turning in Scotland and Ireland.  The ACCA also performed a number, although now that the IPA team is carrying out monitoring visits on ACCA-licensed IPs, I’m not surprised to see the number drop.

It seems that targeted visits have never really been the ICAEW’s weapon of choice.  At first glance, I was a little surprised at this, considering that their monitoring schedule seems less rigidly 3-yearly than the other RPBs’.  Aren’t targeted visits a good way to monitor progress outside the routine visit schedule?  Evidently, the ICAEW is not using targeted visits to focus effort on low-achievers.  Perhaps they are tackling them in another way…

 

Wielding Different Sticks

[Chart 1713: post-visit regulatory actions taken each year, by RPB]

I think this demonstrates that the ICAEW isn’t lightening up: they may be carrying out less frequent monitoring visits on some IPs, but their post-visit actions are by no means infrequent.  So perhaps this indicates that the ICAEW is focusing its efforts on those seriously missing the mark.

The ICAEW’s preference seems to be to require their IPs to carry out ICRs.  Jo’s and my experience is that the ICAEW often requires those ICRs to be carried out by an external reviewer and requires a copy of the reviewer’s report to be sent to the ICAEW.  They also make more use than the other RPBs of requiring IPs to undertake/confirm that action will be taken.  I suspect that these undertakings are often required in combination with ICRs so that the ICAEW can monitor how the IP is measuring up to their commitments.

And in case you’re wondering, external ICRs cost less than an IPA targeted visit (well, the Compliance Alliance’s do, anyway) and I like to think that we hold generally to the same standards, so external ICRs are better for everyone.

In contrast, the IPA appears to prefer referring IPs for disciplinary consideration or for further investigation (the IPA’s constitution means that technically no penalties can arise from monitoring visits unless they are first referred to the IPA’s Investigation Committee).  However, the IPA makes comparatively fewer post-visit demands of its IPs.  But isn’t that an unfair comparison, because of course the ICAEW carried out more monitoring visits in 2017?  What’s the picture per visit?

 

No better and no worse?

[Chart 1714: negative outcomes per monitoring visit, by RPB]

Hmm… I’m not sure this graph helps us much.  Inevitably, the negative outcomes from monitoring visits are spiky.  We’re not talking about vast numbers of RPB slaps here (that’s why I’ve excluded the smaller RPBs – sorry guys, nothing personal!) and the “All” line (which does include the other RPBs) shows a smoother trend overall.  But the graph does suggest that ICAEW-licensed IPs are over three times as likely to receive a negative outcome from a monitoring visit as IPA-licensed IPs.

Before you all get worried about your impending or just-gone RPB visit, you should remember that a single monitoring visit can lead to more than one negative outcome.  For example, as I mentioned above, the RPB could instruct an ICR or targeted visit as well as requiring the IP to make certain undertakings.  One would hope that far fewer than 25% of all IPs visited last year received a negative outcome!

This doubling-up of outcomes may be behind the disparity between the RPBs: perhaps the ICAEW is using multiple tools to address a single IP’s problems more often than the other two RPBs… although why should this be?  Alternatively, perhaps the ICAEW’s record again suggests that the ICAEW is focusing their efforts on the most wayward IPs.
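The doubling-up arithmetic is easy to illustrate.  In this hypothetical sketch (all numbers invented), a minority of visited IPs each attract two or three outcomes, so the outcomes-per-visit rate sits at 25% while the proportion of IPs receiving anything negative at all is far lower:

```python
# Hypothetical figures only, to illustrate the point above: because one
# visit can trigger several negative outcomes (an ICR plus undertakings,
# say), outcomes-per-visit can reach 25% while far fewer than 25% of
# visited IPs actually received anything negative.

def outcome_rates(visits, outcomes_per_visited_ip):
    """outcomes_per_visited_ip: one entry per visited IP, giving the
    number of negative outcomes from that visit (0 = clean visit)."""
    total_outcomes = sum(outcomes_per_visited_ip)
    ips_with_outcome = sum(1 for n in outcomes_per_visited_ip if n > 0)
    return total_outcomes / visits, ips_with_outcome / visits

# 100 visits: 10 IPs each pick up two or three outcomes; 90 are clean.
counts = [3] * 5 + [2] * 5 + [0] * 90
per_visit, per_ip = outcome_rates(100, counts)
print(per_visit, per_ip)  # → 0.25 0.1
```

So a 25% outcomes-per-visit average is perfectly consistent with only 10% of visited IPs being on the receiving end.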

 

Choose Your Poison

I observed in my last blog (https://thecompliancealliance.co.uk/blog/news/ip-regulation-complaints/) that the complaints outcomes indicated that the IPA was far more likely to sanction its IPs over complaints than the ICAEW was.  I suggested that maybe this was because the IPA licenses more than its fair share of IVA specialists.  Nevertheless, I find it interesting that the monitoring outcomes indicate the opposite: that the ICAEW is far more likely to sanction on the back of a visit than the IPA is.

Personally, I prefer a regime that focuses more heavily on monitoring than on complaints.  Complaints are too capricious: to a large extent, it is pot luck whether someone (a) spots misconduct and (b) makes the effort to complain.  As I mentioned in the previous blog, the subjects of some complaints decisions are technical breaches… and which IP can say hand-on-heart that they’ve never committed similar?

Also, by their nature, complaints are historic – sometimes very historic – and it might not matter if an IP has since changed their ways or whether the issue was a one-off: if the complaint is founded, the decision will be made; the IP’s later actions may just help to reduce the penalty.

In my view, the monitoring regime is far more forward-looking and much fairer.  Monitors look at fresh material; they consider whether the problem was a one-off incident or systemic, and whether the IP has since made changes.  The monitoring process also generally doesn’t penalise IPs for past actions; rather, what’s important are the steps an IP takes to rectify issues and to reduce the risks of recurrence.  The process enables the RPBs to keep an eye on whether, when and how an IP makes systems- or culture-based changes – interests that are usually absent from the complaints process.

 

Next blog: SIP16, pre-packs and other RPB pointers.