During our scrum today, one of our developers mentioned that ChatGPT restored its image recognition toolbox in its v4 release. Such tools have been around – albeit in a primitive state – for several years. But the ChatGPT phenomenon has accelerated development and raised the public discussion of AI capabilities to a near-fever pitch.
At VeracityID, we have been capturing mobile images of assets and documents from insurance applicants and analyzing them using AI to assess identity, asset condition, residency, ownership and other key insurance information for almost 4 years. We’re currently working on an update to our image capture/analysis platform, and I asked our data science lead whether we should look at ChatGPT v4 as a low-cost alternative to in-house development.
His response was a cautionary note for anyone considering a public, shared tool like ChatGPT to assess risk. ChatGPT is attractive because it’s easy for anyone to try, and it demos well to non-IT folks. But anything uploaded is used to train their model(s), not yours, and control of the uploaded information is effectively transferred to a third party – likely without informed customer consent, and not for the intended purpose of offering insurance. They apparently exclude personal photos – but what about structured documents like vehicle registrations, proof of address, insurance docs, or rental and purchase agreements? What happens if that content ‘leaks’?
Under GDPR, CCPA and other privacy rules/statutes, information that could be used to identify individuals is considered Personally Identifiable Information (PII) and must be handled with due care by the financial entities (insurers, brokers, public claims adjusters) that collect it. Transferring images containing PII to third parties without appropriate controls (including consent) almost certainly violates those statutes – and I believe is likely to get the attention of the legal (aka class action lawyers) and regulatory (aka DOI) communities.
What should you do if you want to include image capture/recognition as part of your risk assessment process? The answer is pretty clear: build and train a private version of ChatGPT (or a comparable model) yourself, or partner with a secure vendor like VeracityID that already has these capabilities. The sketch below shows what the self-hosted pattern looks like in practice.
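To make the distinction concrete, here is a minimal Python sketch of that pattern – a mobile-captured document image sent to a vision model running inside your own network boundary rather than to a public service. The endpoint URL, task name, and response format are hypothetical placeholders for illustration, not a real API:

import base64
import requests

# Hypothetical endpoint for a vision model hosted inside your own
# network boundary. The URL, task name, and response schema are
# illustrative placeholders, not a real product API.
PRIVATE_INFERENCE_URL = "https://vision.internal.example.com/v1/analyze"

def analyze_document(image_path):
    # Read and encode the captured image locally.
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    # The image (and any PII it contains) stays within infrastructure
    # you control and is never used to train a third party's model.
    response = requests.post(
        PRIVATE_INFERENCE_URL,
        json={"image": encoded, "task": "extract_registration_fields"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Example: fields = analyze_document("vehicle_registration.jpg")

The design point is simple: because the inference endpoint lives inside your own (or your vendor’s contractually controlled) environment, no applicant data is transferred to an uncontrolled third party – and no consent or control questions arise downstream.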
Obviously, we prefer you choose the latter. But whatever choice you make, we strongly suggest that anyone considering a public AI tool for risk assessment weigh whether the information shared could result in an unauthorized disclosure – and with it fines, reputational calamity, career-ending ‘oops’ moments and worse.
Look for more about AI from us in the days ahead. VeracityID is doing some truly innovative work in this area, and I hope you will share the journey with us. In the meantime, let me know what you think.
Talk to you soon.