
Why Social History Is Important For Retail Trust

Society and culture are becoming factors in algorithms in ways that marketers have not anticipated. That means brands can no longer run on autopilot when employing algorithms to deliver customer value. To do so puts brand image at extreme risk.

For example, Bloomberg reported that Amazon's Prime same-day delivery program overlooked African American neighborhoods in major cities. The underlying algorithm was meant to select neighborhoods in which to launch the service, but critics quickly pointed out that its output mimicked a long-standing pattern of redlining, excluding economically challenged communities in order to select the "best" service recipients.

Political leaders soon took action. Boston Mayor Martin J. Walsh and Massachusetts Senator Ed Markey asked Amazon to provide Prime same-day delivery to Boston's Roxbury neighborhood, one of the traditionally African American neighborhoods that the algorithm had excluded.

Elsewhere, data scientists are discovering ways to better society by using algorithms to model the world more accurately, based on data. For example, image data has only recently become associated with latitude and longitude, as mobile devices with cameras tag GPS information to the images they capture. That capability, combined with facial recognition technology, has been used to map the impact of global warming on coral reefs, in an effort to save the reefs from environmental destruction. Marine researchers can even build 3D models of the reef environment.

But within marketing, understanding the potential impacts of algorithms requires different tactics from the web analytics practices of the past. Web analytics have evolved, with more sources for metrics, but the central premise has always been descriptive analysis: dashboard reports that place data in pre-formed templates. Analysts attempted to infer customer interest and intent from the performance of content associated with a particular product or service.

Machine learning models also rely on data, but the quantity of data and the speed of processing make them exponentially smarter, and capable of making associations at scale. Unfortunately, that very scaling can mean basic errors leading to over-extended, and unintentionally prejudiced, conclusions.

Statistical theory can be used to explain errors and anomalies. Imbalanced classification, for example, describes a situation in which an algorithm sees too few examples of a minority class of data, and therefore cannot make reliable predictions about it.
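
To see that failure mode concretely, consider a minimal sketch below, using scikit-learn and synthetic data (neither of which the article specifies): a classifier trained on a heavily imbalanced dataset can post high overall accuracy while remaining nearly blind to the minority class.

```python
# Minimal sketch of imbalanced classification, using scikit-learn
# and synthetic data -- an illustrative assumption, not the article's tooling.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic dataset: 95% majority class, 5% minority class.
X, y = make_classification(
    n_samples=5_000,
    n_features=10,
    weights=[0.95, 0.05],  # severe class imbalance
    random_state=0,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

# A plain model scores high overall accuracy while barely learning
# the minority class -- recall for class 1 tells the real story.
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

# One common mitigation: reweight so minority-class errors cost more.
balanced = LogisticRegression(class_weight="balanced", max_iter=1_000)
balanced.fit(X_train, y_train)
print(classification_report(y_test, balanced.predict(X_test)))
```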

But what recourse is available when data represents the normative assumptions of a persona, or a demographic? How are value judgments to be fairly represented mathematically? Bias, a model's tendency to consistently learn the wrong thing by systematically failing to take all the information in the data into account, can occur if these questions go unanswered.

The pressure is high to find answers before bias develops and damages a brand's reputation.

Brands are recognizing that they have to be responsible corporate citizens. Today's customers hold a well-ingrained belief that unethical businesses do not deserve their dollars.

Solving the problem of bias-based prejudice raises technical questions. In the context of algorithms, it requires an ethical process: inspecting data assumptions, and probing the way those assumptions are mapped into an algorithm.

That makes cleansing training and validation datasets extremely important. These datasets, in a sense, are meant to represent what you want to see in the data. But as data becomes entangled with social issues, it begins to incorporate the human associations attached to those issues, some of which provoke ethical concerns or raise difficult questions of social history.
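
One practical place to start is checking whether innocuous-looking features quietly encode a sensitive attribute. The sketch below is a hypothetical audit, assuming pandas and made-up column names; any real cleansing process would be far more involved.

```python
# Hypothetical proxy-feature audit, assuming a pandas DataFrame.
# Column names here are invented for illustration only.
import pandas as pd


def flag_proxy_features(df: pd.DataFrame, protected: str,
                        threshold: float = 0.4) -> pd.Series:
    """Flag numeric features whose correlation with a protected
    attribute is strong enough to act as a hidden proxy for it."""
    numeric = df.select_dtypes("number").drop(columns=[protected],
                                              errors="ignore")
    corr = numeric.corrwith(df[protected]).abs()
    return corr[corr >= threshold].sort_values(ascending=False)


# Example: does median_income quietly encode neighborhood demographics?
df = pd.DataFrame({
    "pct_minority": [0.80, 0.70, 0.10, 0.20, 0.90, 0.15],
    "median_income": [38, 42, 95, 88, 35, 91],        # in $1,000s
    "orders_per_week": [120, 150, 400, 380, 110, 395],
})
print(flag_proxy_features(df, protected="pct_minority"))
```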

Thus new questions arise: Is the validation set representative of real-world conditions? Do enough training set observations exist to accurately select a suitable model?
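
The first question can at least be probed statistically. As one illustrative approach (not prescribed by the article), a two-sample Kolmogorov-Smirnov test from SciPy can flag when a validation set's feature distribution drifts away from what the model will actually face in production; the data below is synthetic.

```python
# Sketch of a representativeness check using a two-sample KS test.
# Feature name and data are synthetic, for illustration only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
validation_income = rng.normal(loc=70, scale=10, size=1_000)  # skews affluent
production_income = rng.normal(loc=55, scale=18, size=1_000)  # broader reality

stat, p_value = ks_2samp(validation_income, production_income)
if p_value < 0.01:
    print(f"Distributions differ (KS={stat:.2f}); the validation set "
          "may not represent real-world conditions.")
```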

The ultimate deciding factor must be a sensibility for social history in the data process. That awareness is necessary both in assembling training datasets and in resisting bias.

Data analysis has to be more than the traditional statistical observation of mathematical correlations. We need to realize when danger is seeping into the algorithms used to market and sell our products and services.

Data has begun to ingest our norms and values, so we have to ask whether those values are being accurately represented mathematically, especially as marketers increasingly use mathematics in the service of our brands.

Ultimately, examining the math behind an algorithm through the lens of society and culture will help prevent algorithms from betraying brand image, and even from betraying the values we hold most dear.
