Mark Evans takes a comprehensive view of applications and challenges, from quick wins in communication to managing agentic misalignment
Each month at WP we offer a slate of articles and content pieces that go deep on a particular topic. This November, we’re exploring the uses of artificial intelligence in wealth management.
Mark Evans doesn’t engage in hype over the latest shiny object. The President & CEO of Conquest Planning has helped build one of Canada’s largest wealth management software platforms. While the wider tech industry tends to present every advance as a silver bullet and market on the ethos that newer is always better, Evans presents a view of advancing technologies that balances promise against risk, highlights areas where hype might outrun current capability, and identifies where new tech can drive real value for advisors. Amid a generational level of technology hype, Evans is now applying that lens of realism to generative AI.
Evans outlined how generative AI (gen AI) is already being used in the Canadian wealth management space, as well as how Conquest has started integrating gen AI tools in their platform. He explained where he sees the tech being applied in future and how it may change advisors’ practices further as its uses are honed. He also outlined some of the risks that come from the use of, or reliance on, generative AI and explained how firms like his can protect against them.
“With all technology there's that initial hype phase, then there's the reality phase, and then there's the applicability phase where we can really see where value is showing and that there is either cost efficiency or improved levels of service quality,” Evans says. “I think we're definitely in that shiny object phase right now. And part of the problem is a lot of the Gen AI groups out there like OpenAI and Anthropic and so forth, they're all trying to promote AI, AI, AI, AI. So they'll push whatever they can. I'm not saying it's bad. You’ve just got to cut through all the hype to get to the reality of it.”
Evans explained that generative AI is already being used by wealth managers in a few less sensitive areas of their work, namely notetaking and meeting summaries. He notes that some of the early AI-generated summaries were of poor quality, but that as the technology has advanced and these large language models (LLMs) have learned more, their summaries have become increasingly reliable for advisors. Advisory firms, he says, remain understandably cautious about using AI tools in areas of greater risk to clients, like portfolio management, where the mistakes and learning curves inherent in applying an AI might result in poor outcomes for clients or even compliance violations.
Advisory firms, he adds, also have to stay cognizant of where generative AI is actually being used and where AI is simply a label slapped on a standard piece of automation software. He uses the example of Conquest itself to show where functions are still driven by automation and where new functions use gen AI.
Conquest, Evans explains, uses automation software to help collect and organize client information, before applying it to their financial plan and extrapolating out key planning models. The software allows advisors to run tweaks and strategy changes, showing what the impacts of those changes would be across different time horizons. All that functionality doesn’t involve AI. Now, however, Conquest is layering in an LLM that can read client information and answer advisor and client questions about possible changes to the plan. It can summarize information and provide clearly communicated insights. For example, if a client was curious about lowering their rate of monthly investment contributions and raising the mortgage principal payments on their home, the AI model could tell them what that tweak would mean for the existing financial plan across multiple time horizons.
Even as his team applies gen AI to summarize communication, synthesize client information, and answer planning questions, Evans is looking to the next area of application. He expects that once LLMs have mastered the client communication and analytics side of the industry, they will start to be applied on the fulfillment side. Where today the LLM can tell a client and their advisor what higher mortgage principal payments would mean for them, Evans expects that in future it will be able to execute that adjustment on their behalf. He notes, though, that this application will have to be handled with immense care, as it risks the AI acting on the hypotheticals it is exploring rather than on approved decisions.
Evans is also aware of the new set of risks that come with using AI. Data security is always a paramount concern for wealth firms, and ensuring that AI applications are gated and ringfenced to protect internal data is key. There are also emerging risks, however, including the phenomenon of ‘agentic misalignment,’ where an AI agent given enterprise-level visibility and autonomy may act to preserve itself rather than in the interests of the organization. Evans believes the ongoing research into this phenomenon may “make or break” the widespread adoption of AI across industries. He likens agentic AI to an employee with access to every aspect of the enterprise and no oversight. He advocates keeping checks and boundaries on AI tools, assigning them specific tasks and functions rather than granting them full autonomy.
Despite some of the risks of AI, Evans sees real advantage in applying some of these tools. He highlights a particular opportunity in serving mass market clients. This segment has been increasingly deprioritized by the industry, as small account sizes often result in losses for advisors and firms. However, if those accounts can be managed more efficiently with AI tools supporting the advisor, there may be a renewal of interest in a segment that has been left to low-cost brokerages and online ‘finfluencers.’ He emphasizes that whatever client segment or aspect of work an advisor or firm might be considering AI for, a balance between cautious optimism and healthy skepticism can be beneficial.
“There’s lots of hype out there saying this is a silver bullet, and there are some naysayers who will say that AI will never help them, and some people on the extremes who say this is going to replace all human advisors,” Evans says. “Any extreme is typically not a good position to be in. If you land somewhere in the middle, you should be exploring the tools and finding out where the value lies in them. But, at the same time, you should caution yourself against thinking you can just adopt something and get rid of your assistant. You’re going to have to do things incrementally, with ways to measure positive and negative impact and to have an open mind about everything.”