Ethical scandals have become the norm in the tech industry; every week, a new story surfaces about how the titans of social media have breached the trust and privacy of their users.
But we need to ask ourselves: what does “ethical tech” actually mean?
“Technologies have a clear moral dimension—that is to say, a fundamental aspect that relates to values, ethics, and norms. Technologies reflect the interests, behaviors, and desires of their creators, and shape how the people using them can realize their potential, identities, relationships, and goals,” the World Economic Forum said in a report on ethical technology.
Few have expressed the meaning of ethical tech as accurately as DigitalAgenda, a UK-based tech-for-good think tank, which believes that “ethical tech is, at its heart, a conversation focused on the relationship between technology and human values, the decisions we make toward technological advances, and the impacts they can have.”
According to a report by the think tank, ethical tech refers to a set of values governing an organization’s approach to its use of technologies as a whole, and the ways in which workers at all levels deploy those technologies to drive business strategy and operations.
Beyond privacy, leaders’ biggest social and ethical concerns about digital innovation relate to cybersecurity risks, job displacement, and the use of data.
With the power these technologies confer comes an immense responsibility to build a more just, free, and prosperous online space than the one we currently have; and this awareness is starting to show within the ranks of the world’s biggest companies.
How many times have we seen employees from Google, Facebook, Pinterest, Amazon and the like publicly protest and stand against the policies and behaviors of their employers on ethical grounds?
This came further to the forefront after Netflix aired its documentary “The Social Dilemma,” which featured former employees of these titans of tech.
With minimal protection guaranteed by the industry, regular consumers are left to stay up-to-date and wary about their online behavior, and about how the algorithms at play shape what they feel, think, see, hear, and experience.
Let’s jump into the basic red flags that people should be aware of.
In 2020, anyone who is remotely tech-savvy and keeps up with the news knows that private companies, from social media platforms to mobile service operators, collect massive amounts of data on your every online move.
From real-time location tracking and communications to what you post, what you like, what you ignore, and how long you linger before making a decision, that information is collected and sold to a range of other entities, including but not limited to law enforcement, the intelligence community, advertisers, and political campaigns.
And all of this is done without the informed consent of their users.
People might not realize what’s at stake.
It’s not only about collecting your data to perfectly place which ad you’re going to see next when you’re mindlessly scrolling down your preferred social media platforms; it runs much deeper than that.
This information can be used in a plethora of ways against its users; law enforcement in some countries can access the data and surveillance technology to track and keep tabs on protesters, journalists, persons of interest, and others, a clear breach of their basic human rights.
The trickery of deepfakes
A deepfake takes a media clip, such as a photo, audio, or video recording of someone, and manipulates what the person appears to be saying or doing by swapping their likeness for another person’s.
A perfect example aired back in April of this year, when State Farm ran a controversial TV commercial that appeared to show an ESPN analyst in 1998 making shockingly accurate predictions about the year 2020.
The fact that this is becoming a new trend is legitimately scary.
Another deepfake video surfaced in which Belgium’s Prime Minister Sophie Wilmès appeared to link COVID-19 to climate change. In one particularly frightening example, rumors that a video of the president of a small African country was a deepfake helped instigate a failed coup.
Fake news is still in its prime
Fake news is alive and kicking.
We’ve seen it meddle with elections far and wide, spark trade wars, and cause many other real-world repercussions that society has mostly failed to flag in time.
Between 2015 and 2017 Russian operatives posing as Americans successfully organized in-person rallies and demonstrations using Facebook. In one instance, Muslim civil rights activists counter-protested anti-Muslim Texas secessionists in Houston who waved Confederate flags and held “White Lives Matter” banners.
Russian disinformation operatives organized both rallies, and cybersecurity experts predict more to come in the run-up to the 2020 elections.
Products designed to hook us

It has become the norm for product managers, designers, tech marketers, and start-up founders to craft user experiences that are psychologically difficult to put down.
While the people behind the building blocks of these platforms see dollar signs in the distance, we need to also weigh the matter of their long-term effects on the end user.
This kind of tech is being labelled “habit-forming products”; they are not all bad, but people need to be able to assess for themselves when a habit is turning toxic.
It takes no great leap to see that social media has become a common trigger for psychological conditions such as anxiety and depression; the studies speak for themselves.
In these times, digital media literacy needs to be taught more widely, since most people are not well informed enough to understand the influence companies have over our personal decisions, from which brand of shoes we buy to which president we vote for.