Ethics in Applied CS

Update: Slides and lecture videos from the course conducted during August-November 2021 are available here.

There is growing concern that although rapid information technology development has produced amazing outcomes, it has also caused significant harm, for many reasons. Technology providers may be unable to control what the technology gets used for and by whom, they may not understand the limitations of their own technology, the regulatory response of the state might be too slow, and the technology may increase inequalities by serving the rich and privileged far more than the poor and marginalized. Even technologies specifically designed to address social development objectives and reduce inequalities may fail to do so.

Technology developers have a crucial role to play in addressing some of these challenges. In this course, we outline for computer science students who are about to step into the industry several faultlines where applied CS engages with ethical questions, and describe some of the latest approaches to dealing with these questions. It is important that future technology developers are made aware of responsible practices to follow, and enforce these practices in their work environment so that the companies they work for engage with these ethical concerns. Beyond covering the topics below, the course is also an attempt to catalyze change from within the IT industry by sensitizing future engineers to their responsibilities.

Our goal in the course is simply to have students acknowledge that what technology is built, and how it is used, can have ethical implications, and to be able to raise questions whenever technology is projected as the solution to world problems. We will not aim to train students to resolve ethical dilemmas for themselves; that would be a more challenging exercise, probably best done through a course on the philosophy of ethics.

The course was first taught during Jan-May 2020. If you have any suggestions, please do write to me. The motivation to put this together came from a working paper/essay I wrote in Oct 2018 on ensuring responsible outcomes from technology, followed by further writeups on modeling power relationships into the design of technology, on how design alone is nevertheless limited in ensuring responsible outcomes, and on the need for technologists to persuade, through collective action, their organizations and wider democratic politics to be more responsible. I have now developed these ideas into a book on the need for a new paradigm for technology design and management to prevent the disempowerment effects that technology often brings. More info about the book is here: Technology and (Dis)Empowerment: A Call to Technologists. Feedback will be greatly appreciated, and I'll be happy to send you my local electronic version.


1. Introduction to the concept of ethics

  • Today's context: We will start by reading a few pages from Norbert Wiener's The Human Use of Human Beings, which arguably laid the foundation for thinking about the societal benefits and risks of automation, and challenged scientists to own up to the responsibility for what their discoveries could be used for. We will also look at an interesting view of why the information age of today might be unique in the ethical questions it raises, many of which are captured in The Onlife Manifesto curated by Luciano Floridi.
  • Background of ethics: We will then see that there are different approaches to even thinking about ethics, including consequentialism and utilitarianism, virtue ethics, and deontological ethics. We will then look at the key ethical principles in the health industry: autonomy, nonmaleficence (do no harm), beneficence, and justice, meant as guidelines to govern the day-to-day actions of health practitioners. We will similarly read the ACM (new) and IEEE codes of ethics as equivalent guidelines for engineers and computer scientists. An amazing set of interviews by Michael Sandel is a must-watch.
  • Key take-away: We want the students to understand the broad philosophical framing about ethics, and how especially the use of information technology raises many ethical questions.


2. Examples of points where applied CS engages with ethical questions, and how to deal with them

  • UI design: Students will be introduced to topics like good and bad nudging, consent for use of data, autonomy of users, and privacy concerns. A motivating example is a very good read about Uber's use of behavioral economics to design UIs for its drivers. We will also read a few pages from Nudge, by Thaler and Sunstein, and on the ethics of persuasive technology. We will then look at Duquenoy's approach to design using the idea of justice, to understand the responsibility that designers actually possess when building systems for others.
  • Data and algorithm design: Students will be introduced to topics like predatory advertising, bias in prediction systems due to underlying bias in the training data, missing feedback and grievance redressal for incorrect decisions made by algorithms, and how this can increase inequality. We will read some chapters from Weapons of Math Destruction, by Cathy O'Neil. We will also learn about technology-based solutions to address some of these concerns, like FAT (Fairness, Accountability, Transparency) in machine learning and Arvind Narayanan's 21 definitions of fairness and the underlying ethics coded in them, and look at large-scale studies of the effects of bias in search engines (Google's effect on election results), news recommendation systems (Facebook's news feed and other experiments), and agenda shaping (studies by Gary King). ACM even has a policy on algorithmic transparency and accountability. A motivating read on how technology can erode democracy and liberty is Yuval Harari's widely cited article.
  • System design: While UI and algorithm design are centered within technology itself, and can reinforce each other, concerns also arise about the appropriateness of a solution's design for the specific problem it addresses. We will use a case study of the Aadhaar system to differentiate between empowering and disempowering designs of technology to solve the problem of transparency and accountability in the poor's access to entitlements. Was the centralized, technology-driven, self-service methodology of Aadhaar a suitable solution to the problem, or could a decentralized, technology-plus-people driven, community-centered methodology of SMS alerts and collectivization have been better? Jean Dreze's Sense and Solidarity discusses this well, and Reetika Khera highlights the impact of Aadhaar on welfare programmes. Systems thinking may provide a framework for designing appropriate solutions, and we will look at Donella Meadows' book Thinking in Systems to understand the power of creating appropriate information flows through technology to solve many problems. Research by Dipanjan Chakraborty from the ACT4D group on appropriate system design for citizen-government engagement builds upon these ideas. Langdon Winner's article Do Artifacts Have Politics? also discusses the importance of system design in shaping the political outcomes that arise from technology.
  • Socio-technological interface design: Finally, even when the system design and technology design are suitable, concerns arise about how the systems interface with society, and issues follow from existing inequalities in capability for technology use, literacy, and community ownership. We will read chapters from Toyama's Geek Heresy, which emphasizes the need to understand the motivations of users, and Aaditeshwar Seth's working paper/essay on creating suitable processes to manage the complexity of using technology for development, which discusses the importance of designs that include users in shaping the norms of how the technology gets used. Similar challenges that arise when designing technology for others, and the need for constant iteration in handling the ethical concerns of technology, indeed shaped our own action-research based methodology of constant iteration to discover processes to manage the socio-technological interface. A given technology clearly needs to be analyzed on many fronts -- its position in the broader system, its interface with different societal elements, and its technological design -- and the principles outlined here could be useful in not repeating our mistakes with new technologies that are being built.
  • Key take-away: We want the students to walk away with an understanding that they cannot take the outcomes arising from technology for granted and that they cannot assume a God-like know-it-all attitude when developing technology.
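To make the fairness definitions discussed in the data and algorithm design topic concrete, here is a minimal sketch (with made-up toy data, not drawn from any of the readings) of two of the most common definitions, demographic parity and equal opportunity:

```python
# Toy data: actual outcomes, classifier decisions, and a protected attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 0, 0]
group  = ['A', 'A', 'A', 'A', 'B', 'B', 'B', 'B']

def selection_rate(g):
    """Fraction of group g that receives a positive decision."""
    preds = [p for p, grp in zip(y_pred, group) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(g):
    """Fraction of group g's actual positives that are predicted positive."""
    pairs = [(t, p) for t, p, grp in zip(y_true, y_pred, group)
             if grp == g and t == 1]
    return sum(p for _, p in pairs) / len(pairs)

# Demographic parity: positive-decision rates should match across groups.
dp_gap = abs(selection_rate('A') - selection_rate('B'))

# Equal opportunity: true positive rates should match across groups.
eo_gap = abs(true_positive_rate('A') - true_positive_rate('B'))

# Here dp_gap is 0 (both groups are selected at rate 0.5), yet eo_gap is
# about 0.33 -- satisfying one fairness definition does not imply another.
```

The tension in the final comment is exactly why the choice among the many definitions of fairness encodes an ethical stance: the definitions are mutually incompatible in general, so picking one is a normative decision, not a purely technical one.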


Lecture notes: User interface, privacy, data and algorithms, system design, deployment management

Additional notes - student presentations: Differential privacy and k-anonymity, Private communication, Explainability in machine learning, Handling imbalanced datasets, Fairness in machine learning, e-governance projects - NREGA MIS, Bhoomi, Aadhaar, GDPR, Examples from Weapons of Math Destruction, Managing moderation on social media platforms, Fake news industry and application search optimization, Gig economy
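Among the presentation topics above, differential privacy can be demonstrated in a few lines. Below is a hedged sketch (toy values; the function names are our own, not from any standard library) of the Laplace mechanism, which releases a numeric query answer with noise scaled to the query's sensitivity divided by the privacy budget epsilon:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) noise by inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_answer, sensitivity, epsilon, rng=None):
    """Release a numeric query answer with epsilon-differential privacy.

    A counting query ("how many people satisfy P?") has sensitivity 1,
    since adding or removing one person changes the count by at most 1.
    Smaller epsilon means stronger privacy and therefore more noise.
    """
    rng = rng or random.Random()
    return true_answer + laplace_noise(sensitivity / epsilon, rng)

# A count of 42, released with a fairly strict privacy budget.
noisy_count = laplace_mechanism(true_answer=42, sensitivity=1.0, epsilon=0.5)
```

Averaged over many releases the noisy answer stays close to the true count, so aggregate utility is preserved while any single individual's presence in the data is masked.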


3. The need to take responsibility

  • The language of rights, power, development: We will take a look at books like The Ethics of Development, by Des Gasper and Pedagogy of the Oppressed, by Paulo Freire, to understand the need to take responsibility for the outcomes arising from our actions. At the same time, we will also discuss why we need to be particularly conscious of the implications of our actions on the most vulnerable, and how we can involve them in the conversation.
  • The purpose of building technology: We will read Tim Unwin's arguments on why new technology should be designed with the poor and marginalized as its primary users: if it is not, existing inequalities are only reproduced and the "gap" grows ever wider, effectively never to be bridged. Historically, the state and the market have significantly shaped what technology gets built and marketed, and who the intended user is; why then can we not, for once, build technology with an exclusive focus on the poor and marginalized?
  • Key take-away: We want the students to understand that even though they may now be more aware of the faultlines where ethical issues arise with technology, handling these faultlines will require them to be more responsible and conscious, especially toward the weak.


Lecture notes: Taking responsibility


4. Taking a broader view of technology and human society -- who drives what, the role of media and capital, countering hegemony

  • Technology determinism or the social shaping of technology: Whether technology shapes the contours of society, or society develops and uses technology based on its own agenda, has been a long-standing debate. We will read pages from Donald MacKenzie and Judy Wajcman's book The Social Shaping of Technology, and Merritt Roe Smith and Leo Marx's book Does Technology Drive History? The Dilemma of Technological Determinism, on how new inventions take shape.
  • Social control of technology: Can people control what technology gets used for, and regulate its use to prevent unintended harm? We will read about the Collingridge dilemma, and some short essays like Norman Balabanian's Controlling Technology: Should We Rely on the Marketplace? We will also examine Steven Pinker's argument about ethicists slowing down progress, and the ensuing debates about the need for ethical examination of the methods.
  • Structures to control technology development: From a Marxist perspective, can the computer science graduates we are producing ensure that their labour gets put to good use? We will look at the Lucas Plan, and Mike Cooley's book Architect or Bee: The Human Price of Technology, about when workers took control of what technology they wanted built using their skills. We will look at methods of co-determination, such as Germany's requirement that worker representatives sit on the boards of its companies. We will try to analyze some offshoots from Marx, like Braverman's work on whether today's white-collar workers have more agency than blue-collar workers, and CW Mills' and Lederer's work on how the political ideologies of these two groups of workers have potentially diverged, to discuss whether the skilled and creative engineers can indeed change the IT industry from within.
  • Political economy behind the rise of technology: We will then discuss what forces might be behind today's hypercharged technology-driven culture, which assumes that more technology will always lead to significant benefits for society. Investors, governments, and entrepreneurs all seem to be pushing this, and the media seems to carry their messages prominently. We will look at James Scott's book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed to understand what might be behind the government's excitement about technology-driven development, CW Mills' classic The Power Elite, and some recent work by Anirban Sen from the ACT4D group on understanding corporate-government interlocks, the political economy behind ICTD policies, and media bias itself. We will also discuss companies like Uber and Amazon, which on the one hand aim at democratizing commerce through technology, but on the other face grave internal conflicts over wages and the exploitation of their workers.
  • The challenge of business models: Capitalism lines up the incentives for new technologies to get built and scaled rapidly, but it is also corrupted by the centralization of capital and the lack of a moral guiding compass. Is there no alternative social system that can produce the same pace of technological development but aim it at solving the issues of the poor and marginalized? Even capitalism, for that matter, might be slowing down in its innovation capability. Free and open-source development seems like an interesting phenomenon but is being co-opted into the regular capitalist system. The small-is-beautiful vision of appropriate technology has not been realized. Impact investing, which was intended to be agnostic to any of the above concerns, has remained merely a capitalist financial investment vehicle unable to aim itself at producing meaningful impact. Even new models of a socialist economy have been proposed. The answer could lie anywhere, but what is certain is that change will not happen on its own: it will need to be demanded by the skilled workers of today to ensure that their companies operate towards certain ethical goals, or demanded by the people living in liberal democracies exercising ethical choices, or both.
  • A feminist approach to technology: Some argue that the removal/withdrawal of women from technology may indeed be the cause of the lack of attention paid to ethical concerns. We will look at Joan Rothschild's 1981 paper on a feminist perspective on technology and the future, and discuss why, even after so many years, these ideas are still not mainstream. We will then look at Helen Longino's views on whether there can be a feminist science.
  • Key take-away: We want the students to understand that our views about the superiority of technology-led change might have been shaped by forces with very specific agendas and interests. Bringing positive change in the world might very well be in their own hands: the system is the way it is because it was designed this way, and it need not continue the same.


Lecture notes: Design methods, organizational structure and collective action, political economy, technology and society (TBD)

Additional notes - student presentations: Design methods, Participation in algorithm design, Technology for collectives, Social audits, Legibility enhancing technologies, Platform cooperatives, Media effects and democracy, Surreptitious communication


5. Other topics on technology and society: Just a sweeping view to get a big-picture understanding, if time permits

  • Technology and jobs: We will discuss the raging debate over whether Artificial Intelligence will eliminate jobs and increase unemployment, and if it does, what could be ways to deal with it -- concepts like universal basic income, different kinds of tax and subsidy models, the importance of work for humans in general, and whether solutions like UBI, meant to address unemployment fallouts from technology, can themselves become tools of control and coercion through the use of the same technology.
  • Technology and globalization: We will discuss how Information Technology has accelerated globalization and the mobility of capital, and touch upon the tensions within globalization itself.
  • Big problems of today: My PhD adviser suggested that we should just read the newspapers to identify important problems to solve. We will look at work by Amartya Sen and Jean Dreze, Thomas Piketty, and others, to understand some of the key challenges in the lives of the poor. A brief overview is available as part of a cheatsheet of development problems in India, developed at Gram Vaani. Richard Heeks also has a fantastic list of problems for ICT4D.
  • Key take-away: We want the students to understand that there is a much wider worldview to have than just science and engineering textbooks.


Lecture notes: TBD

Additional notes:



Blogs on student projects:

During the Jan-May 2020 semester, students taking the course did several exciting projects, and a few wrote short blogs about them. A key question that troubled us throughout the course was whether companies can really be interested in being more ethical. Aditya Kumar looked at public interviews of CEOs and employees of many companies to understand what factors in a company's journey may push it towards being more or less ethical -- such as the maturity of the company, its vulnerability to market risk, or whether ethics itself can be marketed to consumers as a value. These insights can help identify which companies may be suitably positioned to become more ethical in their work. Saurabh Jain looked at the space of matrimonial websites in India to ask whether these platforms could be interested in taking up normative goals like standing up against caste or complexion based discrimination, domestic violence, and dowry. Pranav Bhagat and Jay Modi similarly looked at the food delivery platform Zomato, to ask if it could be interested in nudging its users towards healthy eating practices.

In a different direction, Kishore Yadav and Om Prakash looked at Uber's policies in India and how drivers' concerns and goals there differ from drivers' experiences in western markets. Aditya Chhabra looked at another exciting space, apps used in residential gated communities to keep track of visitors, and asked whether the residents, security guards, society presidents, and software system architects are concerned about people's privacy. These wider concerns explain precisely why strategies like ensuring ethics by design are not sufficient in themselves, and Saurabh Jain has discussed this in detail.

Ultimately, in response to pressure from consumers and regulators, companies that may not have been satisfactorily toeing the line of ethics may be forced to do so, and Ankit Kumar Singh attempted to find out how Facebook builds its community standards and socializes them among its own employees. Overall, the projects looked at ethics from multiple vantage points: consumers and users and their experiences and preferences, providers and employees and their awareness and priorities, and the potential to bring both consumer voices and provider responses in line with certain normative directions that society should ideally follow. While these lead to more questions with no obvious answers, our hope is that by exploring the possibilities we will find appropriate paths forward. If we don't ask, we won't know how to improve.



Putting together this outline and reading list has benefited significantly from feedback by Tim Unwin (Royal Holloway), Andrew Dearden (Sheffield Hallam), Melissa Ho (Cape Town), Robin Cohen (Waterloo), Balaji Parthasarathy (IIIT Bangalore), Bill Thies (Microsoft Research), Kentaro Toyama (Michigan), Subhashis Banerjee (IIT Delhi), Reetika Khera (IIM Ahmedabad), and Sumeet Agarwal (IIT Delhi).