Tech companies operate on the cutting edge of innovation, often venturing into new product areas faster than regulation can be expected to keep up. So it frequently falls to the companies themselves to act with care and prudence, and to guard against unforeseen consequences; something they have often failed to do.

The potential ethical issues that tech companies face are so varied and constantly shifting that they cannot all be listed here; what is most needed is for these companies to develop an approach to innovation with societal welfare at its heart. The Ethical Explorer Pack has been launched to help all tech workers - engineers, product managers, founders - to grapple with many of these issues.


Privacy

We share and generate more information than ever before, leaving huge troves of sensitive data about each of us. This creates a range of potential risks around what companies do with that data, how they communicate its uses to data subjects, and how it is protected.

So far, many companies’ records on privacy issues have been pretty poor. Lengthy terms of service agreements have sought to insulate companies from risk while leaving them as unfettered as possible and failing to inform consumers of how data would be used. Alongside these pervasive problems, huge data breaches at the likes of Equifax and Facebook have had disastrous consequences for millions of people.

As the rise in remote working leads to more sensitive data being collected, companies have a responsibility to ensure this is done in an informed, safe and proportionate way.

The Ranking Digital Rights (RDR) project has a detailed chapter on privacy, the various challenges attached to it and some recommendations for company action. For an alternative take, Amnesty International has looked specifically at Facebook and Google and the risks of their current approach to privacy.

RDR produces a ranking of large technology companies from around the world on their approach to privacy.

  • How clearly does your company communicate with data subjects on how their data will be used? Are data subjects able to effectively limit sharing of their data?
  • What is your company’s policy on sharing data with third parties, including governments?
  • Are your company’s policies different in jurisdictions where legal protections are weaker?

Content moderation

The internet has given anyone in the world the power to share anything with anyone. While the enormous benefits of this should not be forgotten, it has led to a host of problems as well, including giving a platform to disinformation and disturbing content.

There are no simple solutions to these problems. While highly inappropriate content naturally needs to be policed and taken down, there is a clear human cost to those responsible for this task. There are also free speech concerns, as something may be provocative and controversial without meriting censorship. Disinformation is another major issue: social media sites in particular have become hotbeds of fake news, leaving different tribes with completely different versions of reality based on the information they see online.

At a minimum, companies need to have robust processes in place to ensure these decisions are taken properly. Having teams of content moderators with mere seconds to review flagged posts does not meet this bar. Many of the companies involved are highly profitable and worth billions; they need to invest resources proportionate to the problem they have helped create.

New America has published a report setting out the influence content platforms can have on democracy, and suggests solutions balancing free speech, privacy and government surveillance concerns.

False information broadcast by tech platforms may be hindering progress on all the other issues in these pages by reducing public support for solutions. Fake news has been found to travel further and faster than true news. Efforts to tackle political misinformation are often neglected in poorer countries, which pose less of a PR risk. Many platforms allow the monetisation of climate misinformation (Google have notably committed to ending this practice), resulting in greater polarisation and the continuing election of climate deniers as political representatives.

  • What is your company’s approach to moderating content? Who inputs into policies around how content is moderated?
  • Is your company transparent about the variables that influence any content moderation algorithms?
  • Does your company allow the promotion and/or monetisation of false or misleading content?
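A robust moderation process starts with decisions being recorded in an auditable way. As a minimal sketch (in Python, with all names, rules and data invented for illustration), even a toy pipeline can log which policy rule produced each decision, making the variables that influence moderation inspectable by reviewers and appeal processes:

```python
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    post_id: str
    action: str   # "allow" or "remove"
    rule: str     # which policy rule triggered the action

# Invented example policy: a small set of banned terms.
BANNED_TERMS = {"scamcoin"}

def moderate(post_id: str, text: str) -> ModerationDecision:
    """Return not just the action taken, but the rule behind it,
    so every decision can be audited and appealed later."""
    for term in BANNED_TERMS:
        if term in text.lower():
            return ModerationDecision(post_id, "remove", f"banned-term:{term}")
    return ModerationDecision(post_id, "allow", "no-rule-matched")

decision = moderate("p1", "Buy ScamCoin now!")
```

Real systems combine machine classifiers and human review, but the design point is the same: an action without a recorded reason cannot be scrutinised.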

Manipulative techniques

Many tech companies design their products to maximise “engagement”: the amount of time spent using them. Obviously making products so good that people will want to use them again and again is generally a good thing; it becomes an issue when companies take advantage of human vulnerabilities to manipulate people into continuing to use products beyond their rational self-interest. As the CEO of Netflix once said, its main competitor is not another tech company, but sleep.

The attention economy has initiated an arms race for people’s attention and time. Unless companies voluntarily step back and impose some self-control, the techniques intended to keep us clicking, buying and watching are only going to become more refined and intrusive. Companies should not seek to maximise the amount of time spent on their sites, but rather should optimise for time well spent.

The Center for Humane Technology are developing a range of resources on this issue and how to design in a more ethical way. There is also a ledger of research on the harms caused by manipulative tech. For a quick primer on the topic, there is this 1843 article or a TED talk. A study of 11,000 e-commerce sites sets out some common “dark patterns” used to steer consumers into making unintended decisions.

Young people are particularly vulnerable to these techniques, on which companies increasingly rely. While it ultimately falls to parents to ensure their children use products responsibly, it is companies’ choice whether they enable or hinder parents’ efforts.

  • What are the metrics your company uses to define success regarding its users? Are those metrics aligned with the users’ own interests?
  • Do your company’s products rely on techniques such as variable rewards, push notifications and continuous streaks that are designed to get users “hooked”? Are children users of these products?

Anti-competitive practices

Big tech companies occupy a dominant position in many markets, with search, shopping and social media obvious examples. While that position has enabled them to deliver savings to consumers, it has also enabled them to undermine competition by using their platforms to favour their products over others’; collect and hoard data that, were it more freely available, could be used by other companies to deliver improved services and public goods; and ape or acquire start-ups that pose even the slightest threat to their dominance.

This is primarily an issue that regulation should be stepping in to handle, and some tech companies have received 10-figure fines for anti-competitive behaviour. But by the time policymakers step in, irreparable damage has already been done: smaller competitors have been put out of business. Tech companies need to keep their greed in check and not act like a rapacious AI programme bent on taking over the world.

This is a multi-faceted problem; the form it takes can depend on the type of company involved. For the very biggest companies, a rundown of some of the issues is here. Specific practices include companies’ approaches to gathering and hoarding data; copying competitors’ products and then privileging the new product on their platform; introducing switching costs to prevent customers from easily changing service providers.

Where it would not interfere with users’ privacy, openly sharing data can both create a more level playing field and lead to much more innovation, as knowledge flowing from data would not be restricted to those companies that generate it. Allowing data portability, such that users can easily export their playlists, social network connections, etc. to another service, would similarly encourage competition and improve consumer welfare.
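As a hedged illustration (all names and the schema are invented), data portability can be as simple as exposing a user’s data in an open, documented, machine-readable format that a competing service can import:

```python
import json

def export_playlists(user_id: str, playlists: dict[str, list[str]]) -> str:
    """Serialise a user's playlists to portable JSON.

    Publishing the format lets any competing service import the
    data, lowering the cost for users of switching providers.
    """
    return json.dumps({
        "schema": "playlist-export/v1",  # hypothetical schema name
        "user_id": user_id,
        "playlists": [
            {"name": name, "tracks": tracks}
            for name, tracks in playlists.items()
        ],
    }, indent=2)

exported = export_playlists("u123", {"Road trip": ["Track A", "Track B"]})
```

The engineering is trivial; what matters is the choice to document the format and keep the export path working, rather than treating user data as a switching cost.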

  • Has your company erected barriers to customers switching to competing services? For example, is it straightforward for users to move their data to another service provider?
  • Does your company prioritise its own goods on platforms it operates over those of potential competitors?
  • What is your company’s lobbying position on regulations that would improve interoperability between different platforms (e.g. sending messages between different messaging apps)?


Algorithmic bias

An increasing number of decisions are being taken or influenced by input from algorithms, from the online advertisements you see to criminal sentencing and parole decisions. These algorithms are generally trained on large amounts of data, which has led to algorithms encoding and perpetuating the disadvantages already faced by certain groups, particularly along the lines of gender and race.

The companies behind these algorithms need to stop building the failures of the past into the infrastructure of the present and future. Companies must dedicate resources to ensure that algorithms are not producing inequitable outcomes for at-risk groups, and open up these “black boxes” to external scrutiny.

There is a wealth of research on how algorithmic bias can lead to inequitable outcomes in different settings - advertising, criminal sentencing, financial services, recruitment.

Transparency needs to be a minimum requirement: other parties should be able to access the datasets used to train algorithms, and the decisions those algorithms produce, in order to detect and prevent instances of bias.

  • Are there processes in place to test whether algorithms or AI services developed by your company are producing different outcomes for at-risk groups?
  • Is there third-party research into issues with algorithmic bias in your field? Has your company addressed any issues identified?
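One concrete minimum process, sketched here in Python with invented data and an invented threshold, is to compare a model’s favourable-outcome rates across groups (a “demographic parity” style check). Real audits use richer metrics, but even this simple comparison would flag disparities a company should be able to explain:

```python
def positive_rate(outcomes: list[int]) -> float:
    """Fraction of cases receiving the favourable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(group_a: list[int], group_b: list[int]) -> float:
    """Absolute difference in favourable-outcome rates between groups.

    A large gap does not prove bias on its own, but it flags a
    disparity that warrants investigation and explanation.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Invented example: loan approvals (1 = approved) for two groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 37.5% approved

gap = parity_gap(group_a, group_b)   # 0.375
flagged = gap > 0.2                  # hypothetical audit threshold
```

Running checks like this routinely, on every deployed model and with results open to external scrutiny, is the kind of process the questions above are probing for.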