AI presents opportunities galore to SaaS vendors, and ethical use is very much en vogue.

Most will do the right thing, but this is new ground and it’s all too easy for it to go horribly wrong.

For many, the biggest risk is simply not understanding the implications of implementing AI systems, and those implications are numerous.

Just ask Dr Andreas Cebulla of Flinders University.

He’s been working on a model to help businesses ask the right questions, ably assisted by me and by AI and ethics specialist Matthew Newman.

In its current draft form, the model uses the word “risk”, and the reasons are clear.

For all the good that AI offers in terms of efficiency, accuracy, and insight, poor implementation can easily disenfranchise your team or your customers.

And when it goes wrong, people will want answers.

For SaaS vendors, this will be the moment of truth.

Transparency and accountability will be everything.

Of course, the funny thing is that customers don’t care whether you use AI, and some may even be put off by it.

It’s why I dislike the use of .ai domain names.

Customers just want technology that solves their problem, and effective support when someone spills their beer.

So, do you have your AI bases covered?
