The fashion industry, now extensively digitised, is subject to deep-rooted algorithmic bias.
The advent of digitisation began around 2019, with companies and customers alike noticing a trend toward e-tailing. COVID-19 escalated the process further: McKinsey & Company reports show a reduction in willingness to purchase through physical stores of 70% in North America and 80% in Europe. A "digitise or perish" mentality has produced a new mode of communication and connection that brands leverage in the media-driven age.
A pertinent issue accompanying this shift is algorithmic bias: a phenomenon in which unfair and prejudiced data is fed back through the system, owing to pre-existing societal prejudices. As a result, skewed product recommendations, size predictions and more come to the forefront, producing a circular flow of age-old stereotypes that sets back generations of progress.
This arises from flaws in the primary data available to AI: pattern recognition and learning are conditioned on the over-representation of particular skin tones, body types, sizes and more.
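To make the mechanism concrete, here is a minimal sketch using invented, illustrative numbers (not real industry data): when training labels over-represent a narrow band of body types, even a trivially "accurate" model never serves the under-represented group.

```python
from collections import Counter

# Hypothetical training labels for a size-recommendation model:
# the data heavily over-represents a narrow band of body types.
training_sizes = ["S"] * 70 + ["M"] * 25 + ["L"] * 4 + ["XL"] * 1

# A naive model that simply predicts the most common label reproduces
# the skew in its data rather than serving the individual customer.
majority = Counter(training_sizes).most_common(1)[0][0]

def predict(customer_size):
    return majority  # every shopper is offered the over-represented size

# Shoppers from the under-represented group are never served correctly.
xl_hits = sum(predict(s) == s for s in ["XL"] * 10)
print(majority, xl_hits)  # S 0
```

The point of the sketch is that the model's failure is invisible in aggregate metrics (it is 70% "accurate" on its own data) while being total for the minority group.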
This impacts three essential areas of business: consumer perception, demand forecasting and its ethical implications, and dangers to society.
Consumer perception comprises three subsets: brand equity, value equity and relationship equity, all of which contribute to an enhanced brand image and, in turn, customer acquisition and repeat purchases. The AI models providing recommendations to these consumers are trained on data that is restrictive and non-exhaustive. Mind perception theory is implicated here, with both perceived agency and perceived experience affected. An unfavourable experience reduces trust in the system's capacity to act and thus creates algorithm aversion rather than the desired algorithm appreciation. Human agents are deemed more trustworthy in retail settings, which is detrimental to technologically dominated systems. Moreover, empirical evidence from the study of Roza Do (2020) suggests that poorly implemented systems may produce negative biases against a group of people belonging to a particular socio-economic class.
These factors combine to distort perception through algorithmic flaws, which are further amplified by data-collection bias and feedback loops, pushing society back toward regression.
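The feedback loop mentioned above can be sketched in a few lines. This is an illustrative simulation with hypothetical starting numbers, not a model of any real recommender: items are ranked by past popularity, users click what they are shown, and the initial skew compounds round after round.

```python
# Hypothetical starting click counts for two style groups; "mainstream"
# begins only slightly ahead of "niche".
exposure = {"mainstream": 55, "niche": 45}

for _ in range(100):
    shown = max(exposure, key=exposure.get)  # "most popular first" ranking
    exposure[shown] += 1                     # the shown group earns the click

print(exposure)  # {'mainstream': 155, 'niche': 45} -- the gap only widens
```

Because the recommender only ever surfaces the currently more popular group, a 10-click head start becomes total dominance: this is the "circular flow" of bias in miniature.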
While concerns around micro-trend prediction and demand forecasting are handled relatively seamlessly, thanks to algorithmic awareness of consumer impressions and social-media-driven data analytics, several ethical implications arise. At the back end, AI in fashion relies on image recognition and processing, predictive analysis and trend forecasting, personalisation and recommendation systems, natural language processing, and generative adversarial networks. A company's market segmentation and targeting capabilities are skewed by an increased focus on particular demographics, sizes and styles, leaving opportunities in underserved markets uncapitalised.
Beyond management concerns, legal and regulatory requirements are constantly being amended, with stricter rules to ensure privacy and other safeguards. Overlooking such changes is a mishap many companies suffer, opening the door to legal liability.
From a societal perspective, age-old stereotypes are reinforced. Unrealistic, hyper-idealised expectations about thinness, skin tone and sizing perpetuate lower self-esteem and dangerous social repercussions. Studies suggest a correlation between self-esteem and purchase behaviour, and the glamorisation of irrational standards shapes overall customer behaviour. Cultural homogenisation is also on the rise: homogeneous trends overpower niche subcultures, stifling unique identities and creating a cycle of redundant trends. The resulting loss of diversity and representation hampers identity formation and self-expression, forcing individuals to conform to standards and beauty trends not of their choosing.
One way to combat this bias is to invest heavily in algorithmic personalisation, using individualised pattern recognition and demographic information to feed accurate and fair recommendations.
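One common preprocessing step behind such efforts is inverse-frequency reweighing, so that no demographic group dominates training simply by being over-represented. The sketch below uses invented group labels and counts purely for illustration; real systems would apply the resulting weights when fitting a recommender or classifier.

```python
from collections import Counter

# Hypothetical customer records tagged by demographic group; the raw data
# is imbalanced, so an unweighted model would over-fit to the majority.
groups = ["A"] * 80 + ["B"] * 15 + ["C"] * 5

counts = Counter(groups)
n, k = len(groups), len(counts)

# Inverse-frequency reweighing: each group's samples are scaled so that
# every group contributes the same total weight (n / k) during training.
weights = {g: n / (k * c) for g, c in counts.items()}

for g in counts:
    assert abs(weights[g] * counts[g] - n / k) < 1e-9  # equal contribution
print({g: round(w, 2) for g, w in weights.items()})
# {'A': 0.42, 'B': 2.22, 'C': 6.67}
```

The design choice here is deliberate: rather than collecting new data (slow and costly), reweighing rebalances what already exists, though it cannot invent signal for groups the data barely covers.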
Despite such measures, the resolution of algorithmic bias is a long way off. The perpetuation of stereotypes, the ethical and legal concerns, and the erosion of consumer trust and acquisition are near-inevitable by-products of digitisation: skewed primary data and distorted interpretation create problems for consumers and companies alike in the fashion industry. Eradication may seem far-fetched, but it is an achievable outcome for the good of society as a whole.
References:
- Advancing algorithmic bias management capabilities in AI-driven marketing analytics research. https://www.sciencedirect.com/science/article/pii/S0019850123001566
- Algorithmic abstractions of ‘fashion identity’ and the role of privacy with regard to algorithmic personalisation systems in the fashion domain. https://link.springer.com/article/10.1007/s00146-021-01235-8
- The ultimate guide to fashion digital transformation
- Fashion in the age of algorithms: Balancing Technology, Creativity and Sustainability. https://www.smartfashion.news/blog/how-algorithms-are-shaping-the-future-of-fashion-trends
- Why should we address algorithmic bias in fashion?
Author: Kanishka Chawla, Student Editor