In our latest insight, Rob Parry, a Managing Director in our Disputes, Investigations & Valuations team, discusses his concerns over the use of AI and how, for now, nothing beats human experience and expertise gained over many years.

Until relatively recently, I was wary of using AI for anything beyond turning pictures of my dog into artwork, and reluctant even about that frivolous use once I realised how much energy each AI request consumes. For important things, AI felt like a black box whose inner workings I couldn't explain.

In the eponymous TV series, Blackadder said to the Prince Regent, "I am one of those people who are quite happy to wear cotton, but have no idea how it works." I'm not an AI expert and I don't know how AI works, but I am less trusting of the output of AI than Blackadder was of the output of the "ravelling Nancy".

Time for me to re-appraise AI?

When someone close to me needed important medical treatment, I was initially surprised that their treating surgeon used AI when determining the treatment plan. That made me pause: if someone so highly experienced and qualified can use AI for something as important as medical treatment, then maybe I need to re-appraise. I have nearly 40 years in my chosen profession, and I suspect that surgeon has more. You'd think they'd seen and done it all when it came to this condition, and would intuitively know the best treatment plan.

Given my AI Luddite status, why was I confident that the surgeon's AI-suggested treatment plan reflected the best approach? The answer is that the surgeon using it is a leading expert and was using AI developed for this sole purpose. As an expert, they knew what inputs to give their black box and how to carefully review the output, and they had the well-developed intuition to challenge any output that many years' experience told them didn't look right.

It’s a matter of inputs, intuition and experience

As accounting expert witnesses, my colleagues and I have seen the output of reports generated by non-specialist AI used selectively and incorrectly. It can feed confirmation bias, with "experts" using the parts that support their opinion and ignoring the less supportive. That can be embarrassing when the same output is replicated and the unsupportive parts are brought to light.

We have also seen lay clients attempt to generate their own reports, for example for submissions in expert determination, where they have not fully understood what was needed, not used the correct inputs, and not been able to critically appraise the output.

As a test, I used AI to create a proforma for certain mechanical tax calculations that I could perhaps use to save costs in multiple cases. It generally looked good, but experience told me, and double-checking confirmed, that the proforma wasn't producing the correct answer for all years. It was a good start and capable of correction, but it would have been embarrassing had I used the unedited, erroneous template in producing an expert witness report.

Learning from incomplete inputs?

Data security is always a concern for professional advisers. The risks of AI using client data to inform its learning cannot be overlooked, so most professionals, including me, will not allow client data to be entered into AI systems that could share or use that information publicly. To my mind, that means some public AI systems are learning only from what people have allowed to be input to them, which might not include the most important real data, weakening the reliability of the output.

AI is continually learning, but I worry that some systems may learn from people dabbling without the expertise to properly define inputs and critically appraise outputs, or may learn without the most important information being available to them. Rightly or wrongly, I worry about AI learning from incorrect and/or incomplete inputs.

A helping hand, but one that will improve

In the 1980s, much of my analysis work was handwritten, and calculations were made by calculator. Easy-to-use word processing and spreadsheet technology came along as new tools and made those processes more efficient and, usually, more dependable.

For now, I see AI as another phase in technology improving the efficiency of our processes, but perhaps slightly elevated from the category of a simple tool to the equivalent of a new junior member of our team (with the useful talent of painting a dog picture in Hockney style in under a minute). The new team member can do research, but needs guidance and its sources validated. It can help to process data, subject to thorough checking and confidentiality protections. It can help with drafting, but only so far. It can help check our own work, spotting things it thinks we've missed. But, like any other team member, junior or not, AI seems fallible to me. Like a new junior team member, it will keep getting better with proper training and management.

Nothing better than the real thing… for now

For the time being, in our work as expert witnesses, whether assisted by AI or not, preparing the inputs for analysis and critically appraising the output requires the benefit of human experience and expertise gained over many years, not generated in seconds. After all, as an expert witness, I want to be as efficient as possible, but I am giving my own opinion, which can't come out of a black box.

Please don’t hesitate to contact me or another of Quantuma’s specialist expert witness MDs if we can help with your dispute – our expertise has been gained over a collective 144 years.