ICTC’s Tech & Human Rights Series

“No Application Without Representation”

A Conversation with David Ryan Polgar

Published in ICTC-CTIC · 14 min read · Jul 22, 2020

Original interview took place March 19th, 2020

As part of ICTC’s Technology and Human Rights Series, ICTC spoke with tech ethicist and digital citizenship expert David Ryan Polgar. David has brought to light some of the hotly debated issues that exist at the intersections between social media, privacy, ethical design, and digital wellbeing. He has helped define what it means to be human in the digital age. An attorney and former educator, David is the founder of All Tech Is Human, an accelerator for tech consideration and hub for the responsible tech community. David is a three-time TEDx speaker and has been featured by CBS This Morning, BBC World News, the Today show, Fast Company, USA Today, AP, LA Times, and The Guardian. David serves on TikTok’s Content Advisory Council, is an advisory board member for the Technology & Adolescent Mental Wellness program, and is involved in many other related responsible tech initiatives. Kiera Schuller, Research & Policy Analyst with ICTC, interviewed David about strategies for tackling tech ethics, the implications of the data revolution, and how to be an engaged “digital citizen.”

Photo by ROBIN WORRALL on Unsplash

Kiera: Thank you so much for joining me, David. It’s a pleasure to speak with you today. You are widely known as a leading “Tech Ethicist” and “Digital Citizenship Expert.” For our audiences, can you explain what these terms mean and tell us briefly about the work that you do?

David: Absolutely. In my work, I focus on the impacts that social media and technology are having on us from ethical, legal, and emotional perspectives. My background is as an attorney and an educator but, around 2012, I began to see a need for more thoughtfulness around the development and deployment of technology. At that time, there were many people in the “gadget guy space,” but not many discussing technology’s actual impacts. From my perspective, the smartphone that we were all walking around with was rapidly changing how we live, love, learn, and even die — fundamentally altering the human condition, which is a big deal. Today, we’re coming to grips with just how big of a deal it is and the seriousness with which we need to tackle these issues. But we still need to greatly diversify the thinking process that we approach these issues with.

“…though we frame these issues as ‘tech’ issues, they are really societal issues. We might have a tech problem, but what we need is a societal solution.”

The role of a “tech ethicist” is relatively new, but that’s also true for many other terms that we frequently use today, like “data scientist.” In the coming years, I expect to see a significant increase in the number of people focused on tech ethics because, as we’ve seen with scandals in recent years, there is a pressing need to better understand the consequences of technology — including the unintended ones. Since I entered the field in 2012, and especially in recent years, more and more people have reached out to me or All Tech Is Human (the organization that I run, which operates as an accelerator and hub for the responsible tech movement) to ask, “How do I get involved, and how do I get the necessary insight and training?” Just recently, there has been a new push in education toward advancing the role of the “tech ethicist.” They use different terms for it, but essentially a major education company is asking, “How do we think about this as a career path?”

That said, with tech ethics, we should never assume that one person can provide all the answers; if we rely on one person, it puts us in an extremely vulnerable position as a society. In addition to ensuring that technologists, entrepreneurs, and company leaders and executives are more ethical in their considerations, we also need to expand the process of how we interrogate these systems. Most broadly, though we frame these issues as “tech” issues, they are really societal issues. We might have a “tech” problem, but what we need is a societal solution. If you look at AI, which touches upon human rights, you most certainly don’t want to leave it to just one type of discipline to create the solution; you want multi-disciplinary action. This is a time when we need poets, philosophers, ecologists, attorneys, and everyone else because there are so many aspects to the way that technology impacts us.

Kiera: That arguably speaks to the age-old division between STEM and the humanities, a division that seems more false today than ever before.

David: Absolutely. If one thing is very clear, it’s that we always need to think about the human side of technology. For example, just a few years ago, Facebook had to create a “Compassion Team” made up of philosophers and psychologists after a father whose daughter had just died was seeing photos of her on his social network. The social network was just following its algorithms in showing the photos, but in reality, it was dealing with real individuals who have emotions, and the photos deeply affected the father’s grieving process. There’s a book by Scott Hartley called The Fuzzy and the Techie: Why Liberal Arts Will Rule the Digital World, which makes exactly this case: why we need transdisciplinary thinking. I think we’ve also seen a push in academia. I’ve seen many recent college graduates and students who really seem to get it. Many of them are joint majors in philosophy and computer science. They make me very optimistic because that’s the type of thinking we need.

Kiera: One of your central topics is digital citizenship. You co-founded the global Digital Citizenship Summit, held at Twitter HQ in Oct 2016, and have a class on digital citizenship for adults, filmed with Skillshare. How do you define a digital citizen? What does being a digital citizen entail, and why is this concept important?

David: The way I like to define digital citizenship is “the safe, savvy, and ethical use of social media and technology.” The concept has been around for nearly 10 years but has been more popular in the K-12 space among teen and younger audiences, particularly in the US. Lately, however, organizations like Common Sense Media have started sharing the concept with older age groups; and colleges and universities have started asking, “What kind of digital citizenship training do we have for college students, or even adults?” Digital citizenship transitions us away from viewing people as users to viewing them as citizens. Right now, I’m sitting in the US, and I am considered a citizen of the US as well as a resident of a state and a city. Each of these roles comes with certain rights, responsibilities, and obligations.

“How do we incorporate the voice of the people in decisions around technology? The way I like to say it is, ‘No application without representation.’”

We tend to believe, for example, that a citizen’s civic duty is to vote, be engaged, and be knowledgeable about public issues. Being a citizen takes me away from a hyper-individualistic stance where I’m “just an individual,” to realizing that I’m also a community member. I think this is the future for digital citizenship. If you look at trends related to how we’re thinking about social media, you’ll see that people are realizing that even though we’re citizens of an actual country, we’re also participants in online platforms and communities, where we must deal with terms of service and community guidelines. We’ve really struggled to legally define what a platform or social media company is, but when you look at some of the lawsuits and actions that are happening, you see that people are starting to think of these platforms as a kind of country, of which we are citizens.

Digital citizenship matters because online safety has many contributing factors, which involve individuals, businesses, policymakers, and the media. We need to have more socially responsible companies, which usually occurs through media oversight and public education. We need to have more engaged policymakers who make smart regulatory decisions. But we also need more educated and engaged citizens who think of themselves as digital citizens, rather than users. All of these parts interplay. Responsibility doesn’t lie solely with tech companies, the individual, or the government; it’s all of the above. This is why I believe we’re headed toward a politics of technology. These issues are not just technical issues; they are much more complicated. We must consider not just the technology, but also the education, policymaking, and participation behind it, or what I call the entire “tech process.”

Kiera: A lot of people today talk about the notion of “global citizenship” because the internet doesn’t have traditional national borders. But at the same time, these issues raise the question of how to bring everyone to the table to have the same vision. How do you make sure that everyone agrees on the same future and the same regulations?

David: I think that’s going to be the great challenge of the next couple of years. How do we incorporate the voice of the people in decisions around technology? The way I like to say it is, “No application without representation.” The companies that we’re discussing affect my real life. They affect the jobs I see, the people I communicate with, how I communicate, and the reality I see. That’s a big deal. It affects my livelihood and my trajectory in life, so we really want to ensure that it is done correctly. In a democracy, we should all want a voice at the table. Looking at the “techlash” that we’ve seen in recent years, I think a lot of the backlash stems from the vulnerability that people feel toward technology that is being created and is completely out of their control. We can enjoy and love technology but also want to ensure that it’s aligned with our values. Again, this isn’t a tech question but a question of power and equity.

Kiera: Another central theme of your work is tech ethics. This includes issues ranging from the right to be forgotten against society’s value in remembering, to the fine lines between free speech and troll behaviour, to the ethical implications of artificial intelligence and post-death communication. What are the current ethical tech questions that interest you most and why?

David: One ethical issue that I’m really interested in is the modification of human communication. It’s typically overlooked, but it deals with a lot of these fundamental issues about what an individual might want versus what a business might want. The reason why modifying speech matters is that we’re in a very delicate position. Right now, with the web, there is an overwhelming number of people you can talk to and connect with. With this comes a massive pressure for people to communicate very quickly and modify speech for online platforms. For example, if I have a job anniversary on LinkedIn, you might receive a notification that says, “Click here to say, ‘Congrats on the new job.’” But what’s tricky about this is that communication tends to be very reciprocal. As a communicator, I need to know how much time and effort you put in so that I can assess our relationship: how important it is to you, how much you value it, and how much I should value it. If our communication is increasingly becoming thoughtless, then does it count? When we blur the lines between what’s real and what’s automated, it can become really uncomfortable.

Kiera: This may be a simplification, but it seems there tends to be an inherent tension between the business interests of Silicon Valley (attention, engagement) and the human interests of users (trustworthy information, happiness). In 2017, there was a backlash against Silicon Valley for its potential connection with misinformation and tech addiction. Do you think the two sides can be reconciled? Are we any closer to improving the impacts that social media have on user wellbeing and society at large?

David: Tensions and concerns related to technology are often tensions between what is good for the individual and/or democracy and what is good for business. This tension is something that many of us have a tough time dealing with. One of the main struggles of the coming years will be figuring out how to better align our business models with what is good for society. People in tech will be put in a very difficult position if they try to push businesses to do something financially against their interests because tech is a hyper-competitive space, with so many companies. A better approach is, if you don’t like the way people are playing the game, you change the rules of the road. And that is why I think smart regulation is going to be an important course of action.

“New kinds of digital technology pose challenges to our current conceptions of rights. They will force us to reconsider our ideas of victimhood, power, and questions around incentivizing problematic behaviour.”

The tech ethics conversation is a key part of this. Many people ask, “What does ‘tech ethics’ really mean? What is the impact of these conversations? Is this just navel-gazing?” In fact, the impact is massive. The conversations that we’re having around tech ethics now are the canary in the coal mine of what is going to be illegal in the future. Very often, what we consider unethical today becomes illegal later. A quick example would be revenge pornography in the US. Up until a few years ago, most states didn’t have laws against revenge pornography. It was considered unethical, but there were no laws against it. However, that changed. So having these conversations today will help influence policy tomorrow.

Kiera: This is a slight switch of gears, but another aspect of digital citizenship you engage with relates to our health. You use the term “mental obesity” to describe the information overload we face as knowledge workers in the information age. Can you explain the concept of a “knowledge worker,” the “information age,” and the “mental obesity” problem?

David: The way I think about it is that we need to move away from the concept of screen time. I began thinking about this issue early on, in 2012 and 2013: not all information is created equal. I started using the term “mental obesity” because the food analogy is useful. Over time, food shifted from being very finite to seemingly infinite, which led to the rise of the diet and exercise industry. The industry asked, “Okay, if we have infinite amounts [of food], how do we better control our intake? How do we balance food consumption with exercise?” Similarly, information has gone from being finite to being almost infinite for most people. Like the diet and exercise industry, we must ask, “How do we ensure the benefits of information without the harms?” This is why I’ve turned to the idea of “mental obesity,” and why I believe the concept of “screen time” is nonsense today. We should care about the qualitative impact of information we consume as much as the quantitative amount.

Photo by Daria Nepriakhina on Unsplash

Kiera: That ties back into our earlier discussion of citizenship because the information that you consume shapes how you behave as a citizen: how you vote, what issues you see, and how you see them.

David: Yes, exactly. Digital wellness tends to be a part of digital citizenship. Digital citizenship is not only about being savvy with media literacy and consumption but also about thinking holistically, which includes digital wellness. This holism is essential because if there’s one thing we’ve learned, it’s that there’s no magical solution, no magic button to stop online hate speech or misinformation, for example. Rather, the more you increase digital citizenship, promote social responsibility among companies, and encourage citizens to reflect on what they do and don’t share online, the more you reduce the consumption of misinformation.

Kiera: Turning toward human rights, what do you think will be the biggest surprises in the realm of technology and human rights going forward?

David: I think that one of the main questions related to human rights will be how we treat our digital devices. We have begun to humanize many of our devices, and with that comes very uncomfortable questions about what is and isn’t considered appropriate behaviour. Consider Amazon’s Alexa; there is a trend whereby we tend to call digital assistants by female names. Why are we using female names? What impact does that have? Are we perpetuating stereotypes? There is a connotation there — an assistant is something that we boss around. Beyond this, if we think of Alexa as a real person, should we also treat Alexa more like a real person? This has come up in children’s interactions with Alexa. Should a child say “please” and “thank you” to Alexa? Many parents who want to instill politeness and manners in their children have argued that it should be necessary to say “please” and “thank you.” If a child views Alexa as a real person and is learning how to communicate through it, then, the argument goes, Alexa should be treated like a real individual. Ultimately, we will need to understand much better how our human-to-bot relationships affect our human-to-human relationships. Imagine a scenario, like a sci-fi movie, where you create a human-looking bot and begin to treat it as another family member. What happens if a person abuses the bot? Does that count as abuse? If you say, “No, it’s a machine,” this treats it like chattel, not as a person. We could potentially develop protections similar to how we’ve increased protections for pets, but that doesn’t address the question that’s still unanswered: if we allow people to have these interactions with their digital devices, how does that influence their interactions with other humans? That is certainly an area that will come up, and one that I think is going to be very important to focus on.

Kiera: In many ways, we’ve only developed these robust concepts of “human rights” relatively recently for ourselves, so how do we apply these to robots, and what does that do to our ideas of citizenship and having responsibilities and duties under the law? These are huge questions.

David: That is where I think it’s going to be a challenge. New kinds of digital technology pose challenges to our current conceptions of rights. They will force us to reconsider our ideas of victimhood, power, and questions around incentivizing problematic behaviour. Once again, it comes down to the very tough issue of determining what might incentivize negative behaviour, and how that human-to-bot or human-to-avatar relationship might impact human-to-human relations. The research aimed at answering that question is still very much in progress. Another thing that ties into human rights is the issue of synthetics. I think we are going to see more action and struggle in this area in the coming years because we are creating devices and capabilities that enable a person to put on a “new” body. If harassment happens, the person can argue that they are being violated, even if it’s not their real body; the behaviour still directs real negativity toward them. So there will be a lot of challenges for the human rights community, and many questions around law, boundaries, and definitions of concepts. We have exciting years ahead.

Kiera: Thank you so much for joining me, David! It was an absolute pleasure to speak with you.

David Ryan Polgar: A pioneering tech ethicist who helped bring to light the hotly debated issues around social media, tech ethics, unintended consequences, digital wellbeing, and what it means to be human in the digital age. With a background as an attorney and educator, he has been a leader in the Responsible Tech movement since 2012. David has appeared on CBS This Morning, TODAY show, BBC World News, Fast Company, SiriusXM, Associated Press, Washington Post, LA Times, USA Today, and many others. An international speaker with rare insight into how we can build a better future with technology, he has been on stage at Harvard Business School, Princeton University, The School of the New York Times, TechChill (Latvia), The Next Web (Netherlands), FutureNow (Slovakia), and the Future Health Summit (Ireland). David is the founder of All Tech Is Human, an accelerator for tech consideration & hub for the Responsible Tech movement. All Tech Is Human speeds up the process of progress by uniting multiple stakeholders, promoting knowledge-sharing and collaboration, and developing a much-needed hub. David is also the co-host of Funny as Tech, a podcast about our messy relationship with technology, a frequent consultant and tech commentator, and an advisor for Hack Mental Health, the Technology and Adolescent Mental Wellness (TAM) program, and #ICANHELP — all committed to using tech for good. He was recently appointed as a founding member of TikTok’s Content Advisory Council.
Kiera Schuller, Research & Policy Analyst (ICTC), holds a background in human rights, international law, and global governance. Kiera launched ICTC’s new Human Rights Series in 2020 to explore the ethical and human rights implications of emerging technologies such as AI and robotics on rights, equality, privacy, freedom of expression, and non-discrimination.

ICTC’s Tech & Human Rights Series:

Our Tech & Human Rights Series dives into the intersections between emerging technologies, social impacts, and human rights. In this series, ICTC speaks with a range of experts about the implications of new technologies such as AI on a variety of issues like equality, privacy, and rights to freedom of expression, whether positive, neutral, or negative. This series particularly looks to explore questions of governance, participation, and various uses of technology for social good.

Information and Communications Technology Council (ICTC) - Conseil des technologies de l’information et des communications (CTIC)