In my article on affinity ethics I discussed how marketers protect the public image of their products, services, and brands. A customer does not want to purchase a product or service from a company that does not reflect their own values. Not only is the brand’s own image important; relationships with bad partners can drive away consumers too.
But when it comes to AI-driven actions, the mechanisms that trigger customer distrust of a brand occur differently. A full understanding of risks associated with AI and automation is still emerging. (Here’s how one major vendor is dealing with AI, ethics, and empathy.)
Data at the heart
Central to any AI discussion is data. Data is essential, of course, for AI-powered automation —
but in and of itself, data has ambiguity built in. Because of that ambiguity, ethical concerns can arise from its use when AI is applied to marketing decisions. The consequences may not be apparent until the model is operational. The public may not fully understand the AI mechanisms behind the media they encounter, but the real-world outcome of the AI becomes what customers use to evaluate the experience — and judge the consequences.
One way in which brands can damage their own reputations is when AI-powered decision making has the effect of evading regulatory compliance. Uber and Volkswagen both faced penalties in that context. Transport for London, which oversees the city’s transit networks, blocked Uber’s application for a London operating license after discovering that Uber was using software in its app to avoid vehicle inspections. Meanwhile, Volkswagen admitted to installing engine-management software that allowed 11 million diesel-powered vehicles to cheat US emissions testing. The brand reputations of both companies suffered extensively from the scandals.
Understand what AI is doing
Regulations that have emerged, such as the GDPR and CCPA, focus on consumer data rights. These regulations require AI operators to understand how an AI-powered system uses consumer data. That means, in turn, that marketers must be aware that their brand is, in effect, an AI operator.
Marketers must also understand how little data is needed to identify individuals. Latanya Sweeney, of Harvard’s Data Privacy Lab, has shown that individuals can be identified using as little as three pieces of data; for example, zip code, gender, and date of birth.
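Sweeney’s point is easy to demonstrate. The following is a minimal Python sketch over an invented toy dataset (all values are hypothetical), showing how a handful of quasi-identifiers can single out individuals even though no name or ID is ever stored:

```python
from collections import Counter

# Hypothetical toy dataset: each record is (zip code, gender, date of birth).
# All values are invented for illustration only.
records = [
    ("02138", "F", "1960-07-11"),
    ("02138", "M", "1960-07-11"),
    ("02139", "F", "1985-03-02"),
    ("02139", "F", "1985-03-02"),
    ("02140", "M", "1971-12-25"),
]

# Count how many records share each (zip, gender, DOB) combination.
combo_counts = Counter(records)

# A combination that appears exactly once pins down a single individual.
unique_combos = [combo for combo, n in combo_counts.items() if n == 1]
print(f"{len(unique_combos)} of {len(records)} records are uniquely identifiable")
```

Even in this tiny example, three of the five records are pinned down by those three fields alone; at realistic population sizes, the same arithmetic is what makes “anonymized” marketing data risky.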
So what can marketers practically do?
Before deploying AI-influenced messaging and media, marketers should ask a few questions about how data use aligns with customer values, including permissions for data use. For example: How are our brand values represented when the data is in use? How does the outcome of that usage for us compare with what the customer experiences? Answering these questions requires a deeper assessment of the systems in use.
Data and ethics
AI relies on induction — the process of identifying statistical relationships among data from a wide variety of sources — as a basis for its decision-making. It’s tempting to support this by storing large quantities of customer data in data lakes.
But for customers, this raises questions about a brand’s judgment when it comes to collecting and storing data. The case of Amazon retaining data from audio files, despite customers deleting the recordings from Alexa and Echo devices, is an example. Amazon’s convoluted explanation arguably raises more questions among consumers about how much they should trust the company, especially when both its algorithms and its ethics are not as transparently visible as its marketplace presence: “When a customer deletes a voice recording, we also delete the corresponding text transcript associated with their account from our main Alexa systems and many subsystems, and have work underway to delete it from remaining subsystems.”
If ethics are principles that govern behavior, then the principles governing the decisions a machine model makes should reflect those the brand would apply in the same circumstances. Examining a model’s decisioning process means evaluating bias and variance in the data. Variance measures how widely a data set spreads around its average value, while bias measures how far the estimates deviate, on average, from the true value they are meant to capture.
If an AI model is to make decisions leading to a desired outcome, addressing statistical bias and variance is critical. The more variance in a given data set, for example, the better the chance that unwanted bias can be detected, and the greater the confidence you can have in the model’s choices. (Here’s a deeper dive into the bias-variance trade-off.)
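As a rough numerical illustration of these two statistics, here is a minimal Python sketch using invented numbers: a model repeatedly estimates a quantity whose true value is known, and we compute how far the estimates sit from that value on average (bias) and how widely they spread around their own mean (variance):

```python
import statistics

# Hypothetical example: a model repeatedly estimates a quantity whose
# true value is known to be 10.0. All numbers are invented for illustration.
true_value = 10.0
estimates = [9.1, 9.4, 9.0, 9.3, 9.2]

mean_estimate = statistics.mean(estimates)

# Bias: how far the average estimate sits from the true value.
bias = mean_estimate - true_value

# Variance: how widely the estimates spread around their own average.
variance = statistics.pvariance(estimates)

print(f"bias = {bias:.2f}, variance = {variance:.3f}")
```

Here the estimates cluster tightly (low variance) but consistently undershoot the true value (bias of about -0.8) — a model can be precise and still systematically wrong, which is exactly the failure mode marketers need to probe for.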
Thoughtful evaluation can also help identify points of data drift — the sum of systematic changes, from various sources, that erode model performance. These changes can come from well-intended model tweaks or from updates to data inputs such as sensors. Minimizing data drift helps prevent unreliable model decisions.
Get a full picture
Marketers have the responsibility of understanding the environment in which AI is making important decisions. It is crucial to know how to interpret the data the AI is using, the associated statistics, and the potential impact on consumers, whether in terms of reflecting their values or protecting their privacy. My article on social history gives some examples of the societal consequences of getting it wrong.
To protect the brand and its customers, marketers must pursue a deeper understanding of AI to gain a full picture of what managing the ethics around it requires.