In this interview, we sit down with Sayash Kapoor, a Ph.D. candidate at Princeton University's Center for Information Technology Policy. Since graduating from IIT Kanpur with a BTech in Computer Science in 2019, Sayash has earned recognition for his work on machine learning methods and their applications in science from publications like WIRED, the LA Times, and Nature. Recently named among TIME's 100 most influential people in AI, alongside his Ph.D. supervisor, Dr. Arvind Narayanan, Sayash shares insights into his academic journey and the evolving landscape of technology.

 

Let’s start this interview with a trip down memory lane and talk about your early campus stories. What are your most memorable moments from your campus life?

I got involved with ELS because they host Freshers Nite, and I was an anchor for the Y15 Freshers. It was an enjoyable group. However, I was much more involved with DebSoc in terms of how much time I spent there. When I joined IITK, there was no DebSoc and hence no organized debating community. Arnav Mehta, a Y14, led the effort to create one by speaking to first-year Y15 students in humanities courses and asking them to set up DebSoc. In my first year, we went to a few tournaments, and in my second year, we converted DebSoc from an interest group into a Club, and I became one of the Coordinators. In my third year, I went on an exchange program to EPFL. When I came back in my fourth year, we became one of the first teams from IITK to "break" at a debating tournament (breaking is the equivalent of moving into the knockout stage after the league stage). Debating played a significant role in shaping my life at IITK, and going to these tournaments with a group of 20-odd people gave me some of my best memories from campus.

I also liked singing and playing the guitar (I still do). I did not pursue it formally, but I did participate in Galaxy, and after every fest, a bunch of us would go around singing songs in the streets.

What were the initial projects that got you interested in pursuing your research in AI & Policy?

When I joined as an undergrad, I was fortunate that Prof. Purushottam Kar had just joined the faculty. He was looking for students, so I worked with him in my first summer and did a UGP under him for the following two semesters. These projects set me up for my research career. Coursework is excellent, and compared to Princeton or EPFL, IITK does a great job of laying strong foundations, but if you want to do research, you just have to get into it with a researcher.

Among other things, Puru does theoretical machine learning, and back then, I was interested in theory. So, in my third semester, I started a project with him on making machine learning algorithms more robust to corrupted data: how can we ensure the algorithms work as intended if some of your data is corrupted? I started working on this in my third semester, and we published it by my third or fourth year. That set me up for the rest of my research career. Computer Science Ph.D. admissions are pretty competitive, especially in machine learning; to pursue a Ph.D. at any of the top schools in the US, there is an unwritten rule that you should already have a paper or two published. The project also helped me get research experience at EPFL, and it basically kick-started my research career.
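Context Note: To make the robustness problem concrete, here is a minimal, hypothetical sketch of the idea on synthetic data, using the Huber loss, one standard robust estimator. It is purely illustrative and is not the algorithm from the actual UGP project: when a fraction of the labels are corrupted, ordinary least squares is dragged off target, while a robust estimator stays close to the true model.

```python
# Illustrative sketch: fitting a line when 15% of the labels are corrupted.
import numpy as np
from sklearn.linear_model import LinearRegression, HuberRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(300, 1))
y = 2.0 * X.ravel() + rng.normal(scale=0.3, size=300)  # true slope = 2.0

corrupt = rng.choice(300, size=45, replace=False)      # 15% of the points
y[corrupt] = -y[corrupt]                               # adversarially flipped

ols = LinearRegression().fit(X, y)     # squared loss: sensitive to outliers
huber = HuberRegressor().fit(X, y)     # Huber loss: down-weights outliers

print(f"OLS slope:   {ols.coef_[0]:.2f}")    # pulled away from 2.0
print(f"Huber slope: {huber.coef_[0]:.2f}")  # much closer to the true slope
```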

Summer@EPFL and the EPFL semester exchange (SemEx) are highly sought-after programs that receive many applications from our campus and are famous for being very selective. Having studied in the SemEx program, could you talk briefly about why you opted for the EPFL exchange and how it impacted your life?

Before we decided to apply, I had long conversations with one of my closest friends, Kumar Kshitij Patel, about whether we should go for the exchange or stay in Kanpur. In the end, we both applied and studied at EPFL. It came down to the fact that neither of us had ever studied outside India, and at that moment, all other factors seemed inconsequential. It also helped that we were in this together, which made everything so much easier. In my second year, we only thought about what we would do in the following semester (or maybe even just over the next two weeks); hence, many of our decisions also boiled down to randomness. At the end of the day, it seemed like it would be an exciting experience, so we did it.

 

Your journey into the industry is something many people would really admire. Could you elaborate a bit on how you joined Meta and what your early experience of working in the industry was like?

I interned at Facebook at the end of my third year and received a PPO following that. I took a corporate internship over a research opportunity because I'd never worked in the industry before and wanted to see what that was like. Also, I'd heard great things about the UK and wanted to experience living there.

The decision to take a break from academia and join the industry was pragmatic. In my fourth year, I worked on a startup, and one of the reasons I accepted the PPO was that if the startup did not work out, I had someplace to go (and that is indeed what ended up happening). You can't do that with a Ph.D. Another reason to consider the PPO is that Ph.D. programs pay you exceptionally poorly, even in the US, and it's nice to have a bit of a financial cushion to fall back on. Finally, I just wanted the experience to know the counterfactual: to understand what I was rejecting, and to see life in both academia and the industry.

I joined Facebook with another of my IITK batchmates and close friends, Yash Srivastav. While working at Facebook, we continued to live like students. I am fortunate to have had good friends accompany me at each step of my journey. We spent little, and Facebook makes that easy, for example, by giving you all your meals in the office; we made use of those perks quite a bit. As a result, when I returned to academia to start my Ph.D. here at Princeton, it did not feel like a significant lifestyle change.

What were the projects that you worked on during your time at Facebook?

Going into Facebook, I was well aware that I was not aligned with many of Facebook's incentives. However, some teams at Facebook were more aligned with my values, and I joined the self-harm prevention team, where our task was to use machine learning to figure out who was more likely to hurt themselves or attempt suicide, and then take real-world action to prevent such a situation, for example by informing suicide helplines and self-help centers. That was the first half of my tenure there. Six or seven months into my time at Facebook, COVID-19 started, and there was a massive rush of misinformation online about it. Hence, towards the latter half of my tenure, I led a team dealing with the spread of such misinformation on Facebook and Instagram. That was my last project until I left in December 2020.

Context Note: The General Data Protection Regulation (GDPR) is a European Union regulation on information privacy in the EU and the European Economic Area. The GDPR's goals are to enhance individuals' control and rights over their personal information and its transfer. It has significantly elevated the rights of individuals over their personal data, fostering a culture of accountability and transparency among organisations. The regulation has not only empowered individuals to exercise greater control over their data but has also established a benchmark for privacy legislation on a global scale, shaping ongoing discussions and developments in the realm of data protection.
Source: Link

 

Big Tech companies like Meta and Google have often found themselves in the midst of controversy, especially since the arrival of AI. How would you say your time at Meta shaped your worldview about AI and policy-making around AI in general?

That's a great question. When I joined Meta, they were in the midst of implementing all the changes required by the GDPR, and I really did see, to some extent, the impact policy can have on tech companies. Here is a company with billions of dollars at its disposal, having to scramble to meet the demands of a single European regulator. In many cases, I've seen one regulator in one EU country demand something from Meta, and the company then scrambles for resources to fulfill that request. So, I think regulation can go a long way, even if it can't solve all the problems created by big tech. For example, one of the issues we've increasingly seen is privacy. I think the EU has a good model for addressing privacy concerns, and we should learn from it. Of course, there are problems with that model too, and we need to learn from those problems and fix them as we go along.

I've tried to carry that through in my work at Princeton and CITP. We've been attempting to engage with policymakers to see if we can help inform them, especially with so much lobbying and misinformation coming from tech companies. I would like to think we've been somewhat successful, but there's more work to be done. The other limiting factor, in many cases, is the incentives researchers face: researchers are mostly incentivized to publish as much as they can. You know, write more papers, win more grants, and keep the cycle running. At some point, researchers should ask: what's the point of this loop of writing papers, winning grants, writing papers, winning grants, and so on? Is it just a hamster wheel we are stuck on? One of the things I like about my current institute is that policy impact is the real focus. We aren't incentivized just to write papers; we're incentivized to write papers that have a policy impact down the road. That would not be possible at a typical research institution, where I would spend 80% of my time thinking about how to write more papers.

 

Many students are often confused about choosing placements over research opportunities, or vice versa, because of the uncertainty involved. Some believe there is a barrier to re-entering academia after working in the industry. What are your views on returning to academia after working in the industry for some time?

My somewhat controversial opinion is that everyone who wants to pursue research should work in the industry for a year or two, for a few reasons. First, if you have published research work, then by the time you get to academia, that work is better known, and people have had time to cite it. Second, if you don't yet have research publications, getting papers accepted is comparable to a lottery. NeurIPS ran an experiment in which two independent committees reviewed the same set of papers; they disagreed about whether a paper should be accepted about 50% of the time, so paper acceptance is, to some extent, like a coin toss. Working in the industry for a year or two gives you the time to babysit these submissions, revise the papers, and get them published, which significantly increases your probability of having some research published by the time you apply for grad school.

 

How would you describe the area of interest for your research?

The field I am interested in is AI and policy. However, since it is such a large field, for the last couple of years I've been focusing on what AI is and what it can and cannot do. AI is currently an umbrella term for a broad spectrum of technologies. For example, AI is great at "perception tasks", like identifying what is in an image or the tone of a piece of text. However, it fails when it is used for "prediction tasks", like predicting whether a person who is out on bail will commit a crime or whether a person will pay back their loan on time. AI is extremely bad at these tasks. Clarifying what AI can and cannot do has been the focus of my research. I've looked at many applications of AI, notably where AI has failed. For example, my first research project was reviewing a model that claimed to predict with 99% accuracy whether a region is prone to civil unrest and, ultimately, civil war. When we dug into these claims, we discovered that this entire branch of political science, known as civil war prediction, was based on errors. When we removed these errors, the machine-learning models could not outperform decades-old logistic regression. We did not go into this project to disprove the proposed models' accuracy; rather, we went in with optimism about ML and the extent to which these models could be applied. However, through this and several other examples, we discovered that the predictions these models make often tend to be inaccurate, and hence machine learning should not be used everywhere.
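Context Note: Many of the errors found in this kind of literature are forms of "data leakage", where information from the evaluation data influences model training. Below is a minimal, hypothetical sketch of one common leakage pattern, selecting features on the full dataset before splitting it; it is illustrative only and not the replication code from the civil war study. Even on purely random labels, the leaky pipeline reports impressive accuracy, while the clean pipeline correctly reports chance-level performance.

```python
# Hypothetical illustration of leakage via preprocessing before the split.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2000))   # many noisy features, few samples
y = rng.integers(0, 2, size=200)   # labels are pure noise

# LEAKY: pick the 20 "most predictive" features using ALL the data
# (labels included), then split. The test set already shaped the model.
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
Xtr, Xte, ytr, yte = train_test_split(X_sel, y, test_size=0.5, random_state=0)
leaky = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)

# CLEAN: split first, fit the feature selector on the training data only.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
sel = SelectKBest(f_classif, k=20).fit(Xtr, ytr)
clean = LogisticRegression(max_iter=1000).fit(sel.transform(Xtr), ytr).score(
    sel.transform(Xte), yte)

print(f"leaky evaluation: {leaky:.2f}")  # typically well above chance
print(f"clean evaluation: {clean:.2f}")  # close to 0.50, i.e., chance level
```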

 

And when we talk about delivering impact through your research: what was your first reaction when you were featured in TIME? Do you feel that was a big step toward making the impact that your research aims for?

Maybe, I don't know. I think it's helpful to be featured in TIME because your work can reach more people, but it's also important to remember that it is just a list. Who gets in and who gets left out is a highly arbitrary process, and there are many, many people doing work that is probably more important than ours who weren't featured. One thing to note is that who gets featured on lists like these is also a very political decision: a decision about who is safe enough to put on these lists. For example, someone with ideas more radical than ours, someone who thinks maybe the entire premise of AI is mistaken and that much of this technology should be banned, probably wouldn't be featured on a list like this, simply because their opinions are too far outside the Overton window. The list is not just about merit but also about which ideas are amenable to the views that TIME as an institution wants to espouse. TIME is a very centrist, long-standing institution, right?

At the same time, it's great that our work can reach more people; that's always a pleasure. But a lot of the impact on policy does not happen through public conversation. If you want to influence policymakers, you must talk to staff members about what they're looking for. Again, the TIME feature can help because it leads to more of these meetings and policy conversations, or gives us more influence, but eventually it is just a means to an end. It is not an end in itself: the work of ensuring that policy is just and fair is yet to be done.

A lot of claims, especially in political science and humanities papers, are based on huge datasets. The findings of such studies are then extrapolated to larger umbrella populations, like a whole city or even a country, without any sound reasoning backing that. Have you encountered this anywhere?

Yeah, yeah, definitely. What you're describing is called overfitting to the training distribution: you make predictions without deploying the application in the real world and checking how well it works. While it happens in a lot of cases, I think sometimes it's fine because the stakes are low. No one reads a lot of these papers, so you can publish what you want, and it won't have any real-world impact. But it becomes more problematic when it distorts how we think about scientific research or real-world applications. We don't even need to go into political science or social science; computer scientists are doing this today. One example: when the GPT-4 paper was released, OpenAI claimed that GPT-4 outperformed 90% of test takers on the bar exam, based on testing GPT-4 on a real bar exam and looking at its score. That's all fair. But the way this was interpreted was that GPT-4 can replace lawyers. The issue with that framing is that lawyers aren't just taking bar exams all day; that's not their job. So what these tests measure in humans is something very different from what they measure in AI tools. For an AI tool, all the test measures is its ability to memorize facts and details and to reproduce how similar past bar questions were answered. Whereas for humans, who cannot memorize, you know, the entire constitution or whatever, these tests measure something else: the ability to respond to new situations. So, when we conflate these two things, we misinform a considerable chunk of the public about what these tools can and cannot do. And I think the widespread perception that AI will soon replace all or most jobs is partly because of these misleading comparisons that companies like to publish.

You talked about the notion of AI replacing jobs, which is often discussed at IIT Kanpur as well. Do you believe that AI will replace a lot of jobs in the future?

As I just said, our work found that performance on benchmark datasets is not reason enough to believe claims of job displacement, since benchmarks are very different from real-world applications. On the other hand, I think we're already seeing some effect on jobs in the real world. For people who translate texts, people who transcribe video calls, and so on, AI is more likely to replace them. We have a transcription tool running in this meeting right now; such tools have become good in the last few years. And I do think there's a significant probability that, if AI does not replace these jobs entirely, it will make it so that there are fewer jobs available, and the jobs that remain are lower paid, because instead of transcribing an entire recording, workers are only asked to edit transcripts that AI has already produced. This transitional phase is extremely problematic in part because, in editing those transcripts, you might have to do more work than was required to produce the transcripts in the first place. So the only reason you can pay these workers less is that bargaining power has shifted into the hands of the employers. Essentially, what AI will have done is not make it easier for us to do some tasks, like perfect transcription, but rather shift power away from workers, who now have to negotiate much harder or might not be paid as much, into the hands of employers, who can now say: "Oh, if you don't do it for this price, we can just use an AI to do the task, even if it doesn't work as well." This same phenomenon is happening across many industries. I think copywriters, especially junior copywriters, are seeing these effects.

At the same time, I think it's always hard to predict the impact of automation on any industry. For example, there's this thing called the Jevons paradox, which says that as some technology becomes cheaper, it can have two effects: it can either automate away some jobs, or it can make some tasks so affordable that it creates more demand for those jobs. For example, when automated teller machines were introduced, lots of people thought bank tellers would be out of jobs because there was essentially no work left for them to do. What happened instead was this: opening a bank branch used to require lots of money, because one had to hire a whole team of people to service customers. With ATMs, this became much cheaper, and as a result, the number of bank branches exploded. So ATMs had the opposite effect: they ended up leading to more bank tellers being employed rather than fewer. We're at the start of this new technology as well, and I think it's hard to predict, overall, what the impact on jobs will be. It's not a linear process where, you know, AI can do one task, which means that a job goes away. There are many other dynamics involved: how the market reacts, what the prices are, how prices are affected, how much demand there is, and so on.

Context Note: “Coded Bias” is a documentary on Netflix directed by Shalini Kantayya. Released in 2020, the film explores the societal implications of biased algorithms and artificial intelligence systems. It primarily focuses on Joy Buolamwini, a researcher at the MIT Media Lab, who discovers significant racial and gender biases in facial recognition technology. The documentary delves into the broader issues of algorithmic bias and its impact on privacy, civil rights, and social justice.

Source: Link

The paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification" was a seminal work by Joy Buolamwini and Timnit Gebru. It was presented at the Conference on Fairness, Accountability, and Transparency (FAT*) in 2018 and has been instrumental in shedding light on the biases present in commercial gender classification systems, particularly those based on facial analysis.
Source: Link

 

There is a documentary on Netflix called Coded Bias, which talks about algorithmic ethics in the context of facial recognition and mortgage decisions by banks. Given how much influence corporations have over the government, especially in the United States, how do we go about tackling these issues? As an example, Big Tech has a dangerous amount of power, which is only going to increase; these companies already influence many government decisions and try to sway them in their favour. Do you have any views on how we could counteract this influence and move forward, as a community and a society, to a fairer and more just system?

That's a great question. I think we've had some examples of how to deal with this enormous amount of concentrated power in the past. In the US, there have been cases where big companies were broken up, and there are already proposals in the US for breaking up Big Tech; under such proposals, Meta might be split into Facebook, Instagram, and WhatsApp. I'm not sure there is the political will to execute these proposals, but I just wanted to point out that one solution is to break up Big Tech. Another is nationalizing these companies. I think this is far from happening in the US, but to give a historical example, we have done that in India: back in the 1960s and 70s, we nationalized coal mines and all sorts of other things. You do that because you recognize the value these companies bring, but you don't want them to be purely profit-driven. So, you say we need an Amazon to fulfill people's needs; we do need infrastructure to get people what they want. At the same time, such an enterprise shouldn't be driven by profit. It should be driven to help people. I don't think either of these is likely to happen anytime soon, but I'm just saying we've had these two examples come up in the past.

I think what's more realistic is the kind of regulation I was talking about with the GDPR. For example, the GDPR limits what data a company can and cannot collect. Similarly, Canada recently passed a law saying that if social media companies link out to news websites, they have to share some revenue with them. If adopted more broadly, I think these types of measures have some positive impact. The other direction you can take, which is a lot less controversial, is transparency. One of the reasons these companies are dangerous is that we have very little transparency into how they make decisions, for example, about who gets to be on these platforms. Even with AI companies, we don't know, for instance, how people use ChatGPT: whether they're getting incorrect information, whether they're using it for medical or legal advice, how often they're getting incorrect advice and diagnoses, and so on. Pushing for transparency is therefore one of the things that is easier within our current political environment, and it is something that at least I am trying to push for through my research.

Then there is the question of what is just and fair, right? I think proposals for transparency are easy. But consider the facial recognition example from Coded Bias: that work led to the recognition that facial recognition can be biased and that steps needed to be taken to make it unbiased, and steps were taken in this direction. A couple of years after the seminal Gender Shades paper was released, a lot of these companies improved their facial recognition algorithms. But the more fundamental question this discourse raises is what facial recognition should be used for at all. Do we really want facial recognition, even if it's fair and unbiased? Do we want this surveillance technology at all? For example, one of the most horrific uses of facial recognition in the last few years has been the tracking of the Uyghur Muslim population in China. What that means is that you have, essentially, a surveillance state enabled by this technology. And ironically, to be a successful surveillance state, you have to make the technology unbiased: it has to work well on China's minority population for that population to be tracked.
All of that is to say that unbiased technology is not necessarily the technology we actually want in the world. We need to think a bit more deeply about what is just in this scenario. Maybe the regulations we want are ones that ban facial recognition altogether; over a dozen US municipalities have banned facial recognition tools, including some prominent ones like San Francisco, if I'm not mistaken. There has been a lot of focus on fairness and unbiasedness in AI, and I think that's a good stepping stone towards the deeper discussion we need to have, which is what is just in each of these cases. I think we have to be open to the possibility that, in a lot of cases, the answer is that we shouldn't be building these technologies at all, and regulation can help by banning certain uses of specific technologies.

 

AI is very hyped up these days; everyone wants to know everything about it, and some people might even pretend they know everything about it. If someone wants to start from scratch in this field and really understand the concepts, what roadmap would you suggest for such a person?

If this person is at IITK, I think that's the perfect place. There are so many experts there; the professors, the graduate students, and the postdocs are all doing excellent research. You are privileged because you can work with these professors, reach out to them, take their courses, and so on. Regarding the hype around AI: some of it is on researchers, but I think a lot of it is spread by companies' marketing teams. Keeping a critical eye on all these claims by companies is important. But as I said, our research found that even researchers fall prey to these things. Pitfalls in machine learning are very easy to fall into, and it's very easy to fool yourself into thinking your model is very good at something when, actually, it isn't. So that's one thing: being critical, and engaging with the real experts rather than the PR people.

Another is to start from the basics, in some sense. In recent years, we've seen a huge push for foundation models in natural language processing research. Foundation models are huge models that you usually access through an API; as a result, you don't get any experience building these things, and you use them as black boxes. GPT-4 is one example. Using these as building blocks can be great if you're interested in web development or software development. But if you want to work on machine learning and truly understand it, you have to avoid relying on these fancy pre-made consumer products and understand how these models work from scratch, rather than just relying on technology that is already out there.

I should also say that, in some sense, if more of academia starts relying on these commercial models, we're giving away some of the power academia has to oppose the forces of industry. For example, especially in the US, academia is becoming increasingly reliant on industry funding, industry computing resources, and so on, and that makes it hard to keep a critical eye on company-funded research. If a company funds you, it's very hard to call out something wrong happening with their products or to counter misleading claims by that company. The countervailing force that academia can offer is critical research, and for that, one has to start with a deep understanding of these tools and what they can and cannot do, rather than relying on claims by profit-driven companies.
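Context Note: To illustrate what "starting from the basics" can look like, here is a toy character-level bigram language model built from scratch in a few lines of numpy, as opposed to calling a hosted model through an API. The corpus and all names here are hypothetical; this is a purely illustrative sketch, not a prescribed curriculum.

```python
# A toy character-bigram "language model" built from scratch.
import numpy as np

corpus = "the quick brown fox jumps over the lazy dog "
chars = sorted(set(corpus))
idx = {c: i for i, c in enumerate(chars)}

# Count how often each character follows each other character.
counts = np.ones((len(chars), len(chars)))           # add-one smoothing
for a, b in zip(corpus, corpus[1:]):
    counts[idx[a], idx[b]] += 1
probs = counts / counts.sum(axis=1, keepdims=True)   # each row sums to 1

# Sample new text one character at a time from the learned distribution.
rng = np.random.default_rng(42)
c = idx["t"]
out = ["t"]
for _ in range(40):
    c = rng.choice(len(chars), p=probs[c])
    out.append(chars[c])
print("".join(out))
```

Modern language models are vastly more sophisticated, but the core loop is recognizably the same: estimate a distribution over the next token, then sample from it.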


Circling back to this topic of privately funded research: research is increasingly dependent on funding from private companies and profit-driven corporations. Do you feel this can be countered, and should the government spend more on research in public universities, especially in India?

One thing to note is that IITs get a disproportionate amount of the research funding available in India. Something like 80 or 90 percent of higher education funding is spent on 1% of the people pursuing higher education. This extreme resource inequality between IITs and other institutions is one thing to remember when discussing more funding for IITs. I think the trap here is believing that IITs, or any institution, need funding on the scale of Princeton's or Yale's to be successful research institutions. It's true that more resources do help, but scarcity or restriction of resources is also an enabler of creativity. Many of the things we want and need to build, especially at IITs, need to work in low-resource settings. We don't want advances that work only if you have a cluster of ten thousand Nvidia GPUs; we want technology that can work at the scale of computing resources available to the people of India. This type of research is not being pursued by most mainstream labs. So I think this requires a shift in mindset. Rather than competing with Princeton for the tag of, say, the biggest GPU supercomputer, we should step back and ask what this research is even for. Are we just mindlessly fighting an arms race, in which case we will lose, or are we trying to do research that helps the community we want to serve? And who is that community? Is it an international group of researchers, or is it something more local?

For example, one question I would like to ask all IITK professors is: how many of you have interacted with the people of Kanpur? How many of you have gone to Nankari? How many of you have any real, meaningful relationships with people living, say, 500 meters from your house? I think the answer would be that almost none of them have met anyone apart from, perhaps, the people who work for them or others working at IITK. So the very real question is this: we can complain about the lack of funding the government gives to IITK professors, and IIT professors in general, but at the end of the day they form a highly elite social class in India and in Kanpur. We must therefore ask what this elite social class is doing with this funding that justifies diverting it to them rather than to anyone else below them in the country's social hierarchy.

 

I'm quite sure that your work will be read widely on campus after this feature, and there will be a lot of students who finally find the motivation to pursue their dream of research but are afraid they may not succeed because it is competitive or financially unfeasible. What would you suggest to a first- or second-year student who wants to pursue this dream?

I think the practical suggestion is just to hound potential professors and try to work on research problems as early and as much as you can, if that is what you want. I would also not underestimate the value of having some financial independence. I know that is a massive thing for a lot of people. Academia is often portrayed as this fantastic place where you can do whatever you want, but at the end of the day, you still have to pay your rent and buy groceries, right? So I think it is completely legitimate to make trade-offs that favor your financial life. There is some element of shaming people into pursuing research, like, "Oh, if you're not doing academia, you're not pure enough for research." Some academics are prone to falling into this trap, mostly because people like to think they know best what they're doing. But it's important to remember that, at the end of the day, academia is also just a job. If it is one that you want, that's great: pursue research, try to publish papers, and maybe work for a couple of years before joining academia. But if it isn't something you like, there shouldn't be any social pressure to think of research as "purer" than working in the industry. At the end of the day, both of them are just jobs.

 

After you finish your Ph.D., do you plan to go back into industry? There are a lot of research roles in corporate settings, at places like Google Brain, OpenAI, and so on. Is that where you would like to go eventually? In some ways, those places, even though they offer less freedom, can have slightly more impact because the work is driven by real-world applications, at least more than in a purely academic setting.

Yeah, I think that is a hard question to answer. I definitely want to continue doing research. At the same time, a lot of my research has been critical of big tech companies like OpenAI, Google, and so on, so I'm not sure big tech could ever afford me that level of freedom. Even if I continue in research, I think it'll be either in academia or at smaller companies outside of it. But of course, you can never predict the future. Every two years, I've changed my plan for what I want to do, and I expect that will keep happening for the time being. So it's very hard to predict.

 

And that concludes the serious part of our interview. Ending on a fun note, what would your message be to the freshers' batch?

This will sound clichéd, but IIT Kanpur is generally one of those places where you get to do anything you want. You should talk to people as much as possible: your professors and your fellow students. Sometimes you might think you're the only one facing a particular issue, but if you talk to others, you'll find that is far from the case, and that you're actually all in it together.

Editor – Mutasim Khan
Assistant Editors – Aujasvit Datta, Zehaan Naik
Design Credits – Mrunmay Suryavanshi
Special Credits – Soumyadeep Dutta
