If you’re building digital products for the financial services or tech industry, you might be wondering what role AI should play in them. Before adopting any new technology, it’s a good idea to explore its risks, limitations and challenges.
The AI disruption is here to stay. But in an industry as heavily regulated as financial services, can AI realistically be used? Or is it too much of a risk when customers’ money and other sensitive data are on the line?
This post explores the risks and challenges that will limit how you use AI (for now, anyway).
Although AI is often heralded for its time-saving and data-processing benefits, there are serious risks involved in using the technology. Those risks become even more serious when the technology is used in fintech and FinServ.
Here are some of the challenges and risks to keep in the back of your mind as you consider the use of AI when developing fintech products:
The financial services industry is highly regulated. To make things more complicated, there are additional regulations you need to adhere to when developing digital products.
Protecting your users’ data and financials is serious business. And while you may have accounted for it when building your digital product, AI is another story.
Unless you’re building your own AI, you now have to worry about whether or not the third-party AI solution will throw your product and the company attached to it out of compliance. With so much private data flowing into fintech products, an insecure or non-compliant AI can cause big problems for your company.
The first thing to do is evaluate the process you want to attach the AI to.
For instance, some website and app chatbots handle technical support requests like, “How do I make a deposit?” If the AI is programmed to only handle questions of this nature, then things like GDPR and PCI DSS compliance may not be as big of a hurdle.
However, let’s say you implement a predictive analytics solution to provide customers with insights about their spending habits or projected investments. In that case, you do need to be concerned with data security and compliance. The second your customers’ confidential and sensitive data flows through the AI, you need to consider what happens to that data along the way.
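One practical safeguard here is data minimization: send the AI only the fields it actually needs, and pseudonymize anything that identifies the customer. Here’s a minimal Python sketch of that idea. The record shape and field names are hypothetical, and a real implementation would also need proper key management and a documented retention policy:

```python
import hashlib

# Hypothetical transaction record as it exists in your product's database.
transaction = {
    "account_id": "ACCT-448291",
    "customer_name": "Jane Doe",
    "customer_email": "jane@example.com",
    "amount": 82.50,
    "category": "groceries",
    "timestamp": "2024-03-14T10:22:00Z",
}

# Fields the analytics model actually needs to produce spending insights.
ALLOWED_FIELDS = {"amount", "category", "timestamp"}

def minimize(record: dict, salt: str) -> dict:
    """Strip direct identifiers and replace the account ID with a salted,
    one-way hash so records can still be grouped per user downstream."""
    safe = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    safe["user_ref"] = hashlib.sha256(
        (salt + record["account_id"]).encode()
    ).hexdigest()
    return safe

print(minimize(transaction, salt="rotate-me-regularly"))
```

The point isn’t the hashing itself; it’s that the third-party AI never sees names, emails or raw account numbers it doesn’t need.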
It’s not just security breaches via the AI you must worry about either—it’s also misuse of that data. So, the next step to take is choosing an AI partner that takes compliance and data security seriously.
Another thing you’ll need to do is develop an AI policy so that customers know when AI is being used and for what purpose. You’ll need to explain more than just the basics. For instance, the policy should describe how the AI is trained, which data it has access to, and where and for how long that data is stored (if at all).
If you can’t answer those questions, you should probably rethink the solution you’ve chosen.
One of the reasons artificial intelligence has been adopted en masse is its massive data-processing capabilities. To process all of that data, though, AIs need to be trained.
But just as improper training of an employee can lead to poor outcomes, the same can happen with AI, and at a much larger scale. What’s worse is that the AI often can’t explain how it arrives at its decisions.
Here are some of the more common issues that users encounter with AI output:
- Inaccurate or outright fabricated information (“hallucinations”)
- Biased results inherited from biased or unrepresentative training data
- Outdated answers drawn from stale training data
- Confident-sounding output with no explanation of how it was reached
So, you have to be very careful about entrusting AI with critical tasks, and just as careful about trusting the judgments and outcomes it produces.
To deal with these issues, the first thing to do is start small. Use AIs for simpler, more straightforward tasks. For instance, support chatbots that provide users with a set of prewritten choices would be safer to use.
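To make that first step concrete, here’s a minimal Python sketch of a menu-driven support bot. Nothing here is generative; the bot only ever serves prewritten answers, which is exactly what keeps it low-risk. The questions and answers are invented for illustration:

```python
# A menu-driven support bot: no free-form generation, only prewritten answers.
MENU = {
    "1": ("How do I make a deposit?",
          "Go to Accounts > Deposit and follow the on-screen steps."),
    "2": ("How do I reset my password?",
          "Tap 'Forgot password' on the login screen to get a reset email."),
    "3": ("Where can I see my statements?",
          "Statements live under Accounts > Documents."),
}

def run_bot() -> None:
    print("Hi! Pick a question:")
    for key, (question, _) in MENU.items():
        print(f"  {key}. {question}")
    choice = input("> ").strip()
    if choice in MENU:
        print(MENU[choice][1])
    else:
        # Anything off-menu goes to a human, not to a generative model.
        print("Let me connect you with a support agent.")

if __name__ == "__main__":
    run_bot()
```

Because every response is prewritten, there’s nothing for the bot to hallucinate and no sensitive data for it to mishandle.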
Then branch out. The next step might be implementing natural language processing in your app’s search functionality. The AI would then be trained to pull matching or near-matching results directly from your product documentation or FAQs.
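As a sketch of what that next step might look like, the snippet below uses TF-IDF similarity (via scikit-learn) to pull the closest-matching FAQ entry for a free-text query. The FAQ entries are hypothetical; a production system would index your real product documentation and tune the cutoff:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical FAQ corpus pulled from your product documentation.
faqs = [
    "How do I make a deposit into my savings account?",
    "How do I link an external bank account?",
    "What fees apply to international transfers?",
    "How do I download my monthly statement?",
]

vectorizer = TfidfVectorizer(stop_words="english")
faq_vectors = vectorizer.fit_transform(faqs)

def search(query: str, min_score: float = 0.2) -> str:
    """Return the closest FAQ entry, or fall back if nothing is close."""
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, faq_vectors)[0]
    best = scores.argmax()
    if scores[best] < min_score:
        return "No close match found. Try rephrasing or contact support."
    return faqs[best]

print(search("deposit into my savings"))  # -> the deposit FAQ entry
```

The key safety property is the fallback: when nothing in the documentation matches well, the search says so instead of guessing.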
Then you could move into some personalization in terms of marketing and sales. Using data your customers have shared with you, you could program your AI to display customized offers and related financial products in their dashboard.
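A simple, auditable way to start is rules-based targeting on data the customer has knowingly shared. The sketch below is hypothetical (the offer catalog and profile fields are invented), but it shows the shape of the idea before any machine learning gets involved:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    account_type: str      # e.g. "checking", "savings"
    has_credit_card: bool
    avg_monthly_balance: float

# Hypothetical offer catalog: each offer declares who it's relevant to.
OFFERS = [
    ("High-yield savings upgrade",
     lambda p: p.account_type == "checking" and p.avg_monthly_balance > 5000),
    ("Cashback credit card",
     lambda p: not p.has_credit_card),
    ("Automated round-up investing",
     lambda p: p.account_type in ("checking", "savings")),
]

def offers_for(profile: Profile) -> list[str]:
    """Return only the offers whose rule matches this customer."""
    return [name for name, rule in OFFERS if rule(profile)]

print(offers_for(Profile("checking", has_credit_card=False,
                         avg_monthly_balance=7200.0)))
```

Because every rule is explicit, you can explain exactly why a customer saw a given offer, which matters when regulators come asking.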
A bad experience with an AI or content generated by an AI won’t reflect poorly on the AI in your customers’ minds. It will reflect poorly on your financial organization. So, it’s important to be careful about how much you entrust to AIs.
If you’re thinking about using electronic Know Your Customer (eKYC) technology, then exclusion could become an issue. These technologies are used in regulated industries like banking to verify the identity of an applicant or customer.
There are different forms of eKYC in use. For example:
- Document verification: scanned or photographed IDs, proofs of address and similar paperwork
- Facial recognition: a live selfie matched against a photo ID
- Voice recognition: a spoken phrase matched against a stored voiceprint
If any one method is required, with no alternative allowed, it could exclude certain individuals from accessing the digital product or service.
For example, a person with speech limitations wouldn’t be able to complete voice verification. And someone without a smartphone or a printer wouldn’t be able to readily submit the required documents.
You also need to think about how trustworthy and accurate AI is when it comes to verifying identities through eKYC. It could easily block someone’s access to a fintech product if the biometrics don’t match what’s in the database. Or, worse, it could give them access to the wrong person’s account (if, say, their faces or voices were too similar), putting user data and privacy at risk.
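At the core of that risk is a similarity threshold. Biometric matching usually comes down to comparing embedding vectors and accepting any pair that scores above some cutoff: set it too high and legitimate users get locked out; too low and lookalikes get in. Here’s a toy numpy sketch of the trade-off, with invented three-dimensional embeddings standing in for the hundreds of dimensions real systems use:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-D "face embeddings" (real systems use hundreds of dimensions):
enrolled = np.array([0.92, 0.15, 0.35])          # stored at sign-up
same_user_today = np.array([0.88, 0.20, 0.40])   # lighting/angle changed
lookalike = np.array([0.90, 0.18, 0.33])         # a different person

THRESHOLD = 0.99  # the knob that trades false rejects for false accepts

for label, probe in [("same user", same_user_today), ("lookalike", lookalike)]:
    score = cosine(enrolled, probe)
    verdict = "ACCEPT" if score >= THRESHOLD else "REJECT"
    print(f"{label}: similarity={score:.4f} -> {verdict}")

# With these toy vectors the lookalike actually scores *higher* than the
# genuine user, so both clear the threshold: a false accept in miniature.
```

No threshold eliminates both failure modes at once, which is why a human fallback path matters so much in eKYC flows.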
AI has certainly had a big impact on the design and writing industries this past year. While some companies have flocked to this cost-efficient technology to generate websites, graphics and copy for them, there are risks in doing so.
For example, generative AI recombines existing content; there’s nothing truly original about it. AI is also incapable of empathy. Originality and empathy are two critical components of the work we do when creating digital products for our clients or employers and designing experiences for their end users.
That doesn’t mean that AI can’t be used to help speed things along. For instance, you can use AI to do things like:
- Generate first drafts of copy for a human to edit and fact-check
- Brainstorm layout, color and design directions
- Produce placeholder graphics and mockups for early-stage concepts
- Summarize research and user feedback
However, it’s too risky to rely on AI to create final customer-facing designs or copy for the websites and apps you design. You could end up with a product that feels cold and disconnected or, on the flip side, one that feels intrusive and creepy.
There are just certain aspects of digital product development that require the human touch. And when you’re building products where establishing trust and conveying empathy are as critical as they are in fintech, you can’t delegate this kind of work to artificial intelligence.
Now that you know the limitations and challenges of AI in fintech, let’s briefly talk about realistic use cases for the technology.
Fintech chatbots are already being used by a variety of apps and for different purposes. Companies like Venmo, QuickBooks, American Express and Bank of America use virtual assistants to streamline online customer support.
When developing fintech products for internal teams to use, chatbots will also come in handy. For instance, newly onboarded employees can refer to chatbots for questions on how to use the app rather than pestering their coworkers or managers for help. They can also be used to enhance internal search functionality, analytics dashboards and more.
Being able to develop a fintech product that serves dynamic content based on who is using it is a huge advantage. You can show your users customized offers and discounts based on their needs and personal circumstances. You can also display more relevant content like articles, videos and reports based on their data.
This is where we start to get into those murky areas with AI. However, if the data is innocuous enough (like a user’s content preferences, general account type and basic demographics), the risks of using it might be slim.
AI can be helpful when it comes to tasks like market analysis and developing user personas. You can also use it for its predictive qualities. To do this, though, you’ll need to combine general market data with existing customer data. That way, the AI can more accurately profile prospective users and predict what they’re looking for and what will convert them from leads to users.
This type of machine learning would be most valuable when redesigning and refining fintech products. You can apply it to the product itself and to all the sales and marketing experiences that lead up to it (the website, social media ads and so on).
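As an illustration of that lead-to-user prediction, here’s a minimal scikit-learn sketch that trains a logistic regression model on a few lead features and scores a new prospect. The features, data and labels are all invented; a real model would be trained on your actual conversion history:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical lead features: [site visits, pricing-page views, demo requested]
X_train = np.array([
    [2, 0, 0],
    [8, 3, 1],
    [1, 1, 0],
    [12, 5, 1],
    [3, 0, 0],
    [7, 2, 1],
])
# Labels from historical data: did the lead become a paying user?
y_train = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)

new_lead = np.array([[6, 2, 1]])
probability = model.predict_proba(new_lead)[0, 1]
print(f"Estimated conversion probability: {probability:.0%}")
```

Note that nothing here touches customer financials; it’s behavioral and market data, which is what keeps this use case on the safer end of the spectrum.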
Predictive AI can also be useful for your end users—specifically if you’re developing a product that presents users with historical data and recommendations.
Take, for example, a budget-tracking app like Mint. Users expect to find certain features in an app like this: one-click sync with their bank accounts, expense categorization and spend tracking. Predictive analytics would enable the app to take a user’s historical record, anticipate spending surges (like around the holidays) and make recommendations on how to save money or cut expenses in the interim.
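Here’s a deliberately simple Python sketch of that idea: flag months whose spending runs well above the user’s trailing average. The transactions are invented, and a real product would use far richer models and seasonality data:

```python
from statistics import mean

# Hypothetical monthly spend totals for one user, oldest first.
monthly_spend = {
    "Jul": 1850.0, "Aug": 1920.0, "Sep": 1780.0,
    "Oct": 1900.0, "Nov": 2600.0, "Dec": 3100.0,
}

def flag_surges(history: dict[str, float], factor: float = 1.25) -> list[str]:
    """Flag months that exceed the trailing average by `factor`."""
    months = list(history)
    surges = []
    for i, month in enumerate(months[1:], start=1):
        trailing_avg = mean(history[m] for m in months[:i])
        if history[month] > factor * trailing_avg:
            surges.append(month)
    return surges

for month in flag_surges(monthly_spend):
    print(f"{month}: spending surge detected. Consider trimming expenses now.")
```

With this sample data, November and December get flagged, which is exactly the holiday-surge pattern the app would want to warn users about ahead of time.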
Unlike predictive analytics for sales, though, this type of AI would need access to real user financials. This makes finding an AI solution that’s responsible, secure and compliant crucial.
The financial services industry has already begun to adopt artificial intelligence technologies. And there are plenty of reasons to tap into the innovation, from lowered costs and increased efficiency to faster data processing and more.
But before you go implementing a new AI solution into your fintech product, consider the known challenges first. Data security, bias, exclusivity and a flawed user experience are major risk factors. Some put your end users’ data and financials at risk, while others could hurt your brand.
As with all new technologies you encounter, ask yourself: Is this particular AI solution worth it? The list of risks and use cases above should get you thinking about practical and safe ways to start using it.
The information provided on this blog does not, and is not intended to, constitute legal advice. Any reader who needs legal advice should contact their counsel to obtain advice with respect to any particular legal matter. No reader, user or browser of this content should act or refrain from acting on the basis of information herein without first seeking legal advice from counsel in their relevant jurisdiction.
A former project manager and web design agency manager, Suzanne Scacca now writes about the changing landscape of design, development and software.