Over the past 24 hours, I’ve been following with rapt attention the conversation that has emerged after Tristan Harris, a heralded ex-Google employee now dubbed “the conscience of Silicon Valley,” gave an hour-long pitch for “humane tech.” His pitch yesterday was not a dramatic pivot from the work he has been doing with his Center for Humane Technology for the past couple of years, but the media attention around the talk, including the release of a Wired profile, suggested something closer to the unveiling of a manifesto.
So, I tuned into the broadcast recording of the talk, called “A New Agenda for Tech,” which was clearly aimed at investors who may want to contribute to Harris’s new efforts to make “conscientious” tech products. What Harris calls for is a vision to “reverse human downgrading by inspiring a new race to the top.”
The concept of “human downgrading” is off-putting to me. The phrase “downgrading,” of course, is a spin on the idea of technology “upgrading” year after year, but the idea that humanity has “downgraded” itself through technology oddly displaces the agency of the humans involved. All of the “blame,” so to speak, goes to the technology itself, as if, just as engineers offer upgrades to improve software, they have also “downgraded” the consumers of that technology in turn. The engineers, in this formulation, have virtually all the power to shift and shape humanity’s responses.
It may very well be true that engineers have tremendous power in an information economy powered by networked devices. But should we assume that their influence reaches so far into human evolution that we are intellectually and socially “downgrading”? This kind of rhetoric deliberately disempowers and undermines acts of consumer resistance, which are far more likely than any group of engineers to drive change and innovation.
Educational contexts are the ones I’ve been following most closely, and I can’t help but think of MOOCs as a perfect example of this. Many thinkers claimed that MOOCs could revolutionize learning, but what happened? Students pushed back. The model, fundamentally, didn’t work for the populations that engineers thought could be best served by it. It’s not that MOOCs are dead entirely (they still work pretty well as professional learning solutions rather than substitutions for a liberal arts education), but acts of consumer resistance (or even just consumer disinterest) are ultimately what contributed to their decline. That is, technology can and does change, and it is going to be much more responsive to the needs, interests, protests, and questions of those whom the tech serves than to the designers of the tech itself. (Side note: I’m very excited to read the book about MOOCs that Steven Krause is writing, coming out this fall.)
Again, I don’t mean to suggest that engineers can’t make tremendous humanistic change. Many do. What I quibble with is an approach that frames technology concerns as outside technical forces shifting, shaping, and influencing human behavior in ways beyond people’s control, an approach that centers and aggregates power in one place: among designers and developers.
Wouldn’t we be better served if we were less concerned with how we improve designs and products and more concerned with how we improve and diversify the designers themselves? Maybe it’s not humane tech we need. It’s humane technologists.
Mutale Nkonde, a fellow with the Data & Society Institute, wrote a great piece about the potential for people to become Critical Public Interest Technologists, who could (in her words) “be critical of the asymmetrical power systems that lead to the weaponization of technological systems against vulnerable communities.” This kind of call is more of what we need.
To that end, here’s the other part of Harris’s talk that I couldn’t quite get past: that the greatest human crisis today is a crisis of attention.
I just don’t buy it.
Are people distracted by information in digital environments? Of course. As books like Natasha Schull’s Addiction by Design and Adam Alter’s Irresistible attest, much of our software is designed “to keep us engaged” through ongoing notifications and calls to quantify ourselves. It’s not that distraction is an imaginary problem; rather, distraction is deeply misunderstood and easily misidentified. Indeed, what looks like distraction to someone on the outside may not be distraction to the person engaging in a particular behavior.
Distraction is also seen as a barrier to production, a way of reducing or eliminating the potential of human labor. That’s true, but why should production always be what we mourn most? Why worry about production in a digital age when, in environments that are increasingly surveilled and manipulated, we could worry instead about our safety? Why worry about production when so many don’t feel safe enough to produce anything for fear of being outed, identified, and tracked?
The worry that we can’t produce enough because we are so distracted in digital environments assumes that we have equal capacity and will to produce and contribute in the first place. We don’t produce equally because we don’t live in an equitable world. Tech simply amplifies those inequities.
What it comes back to in higher education contexts (and perhaps contexts beyond that) is an awareness not of the potential for “distraction” itself, but of the motivation students have for engaging in learning in the first place, and of what instructors and facilitators can do to spark that motivation. Beckie Supiano of The Chronicle of Higher Education does a great job of establishing the conversation around “digital distraction” in higher education settings, voicing both the concerns and the potential solutions.
One solution becomes very clear: let’s listen to our students, hear their concerns, and be responsive to them. It sounds simple, but it’s scary because it takes the conversation out of the control of the facilitators of learning.
But it’s precisely that control we need to give up in order to be responsive to, and engaged with, how the choices we make in the spaces we provide impact our students’ abilities to learn within them. I’m still learning how to do this myself. I don’t have answers yet, but I’m working on them (via a book I currently have under contract with West Virginia University Press, focused more specifically on how we engage in deep, thoughtful, sustained reading in digital environments).
In contexts outside of higher ed, we might do the same thing: investigate what motivates us to participate in online spaces in the first place and consider how our motivations contribute to the environments we create.
Jenae, I don’t think I’ve actually met you, but I stopped by your poster at ELI this past year. Thanks for this post. I am still chewing on a lot of what you said and referenced, and I appreciate your clarity around how we think and act on the “digital distraction” we encounter as educators, with our students, and on our own. Indeed, listening to our students is a fantastic and meaningful step forward. I am excited to hear about the book you are working on. I’ve been looking for something that speaks specifically to how we read in this new digital era. Looking forward to reading it! Ryan
Thank you so much for this thoughtful comment, Ryan! I agree that there’s a lot to unpack here; this blog post is probably just a beginning of what need to be more sustained thoughts on how we help our students manage and understand their changing workflows and environments.
Thanks for the enthusiasm about the book too! It’ll still be some time before it’s out, but I’m glad to know that there are folks interested out there!
🙂