How Do You Solve a Problem Like Solutionism?

Summary

Who doesn’t like to think that all problems should have some kind of solution? While no one is immune to this desire, solving problems can also bring about new ones. Many of us may know this, whether from personal experience or “the lessons of history.” Nonetheless, there remains a tendency to emphasize the positive results that can emerge from “innovations” or “revolutions” in many areas, and with relatively little critique. Certainly within the context of the “Information Age,” this has implications for research and practice in library and information science. Insights from other fields, along with an understanding of the intricate strands of history, can help with figuring out what those implications might be, and perhaps even with considering and addressing them.

What I’d Like to Solve… and Why It Might Be a Problem

Nearly a month ago, I wrote a post about my doctoral research on perceptions of musical similarity. Although my actual methodology will entail asking people about this topic, with responses that may or may not pertain to genre, I’m quite keen on the idea of music listeners’ pre-existing tastes being used as an entrée for finding music from an unfamiliar genre. The conceit behind this idea is that certain musical and extramusical facets or traits, tempered by variations in one’s emotional state or overall mood, might account for (1) musical tastes that don’t fit traditional genre categories and (2) not liking all music from one’s preferred genres. That, along with my own personal experience, was the initial impetus for my research.
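To make that conceit a little more concrete, below is a minimal sketch (in Python) of how facet-by-facet comparison, adjusted by a listener’s current mood, might sidestep genre labels altogether. The facet names, weights, and scoring function are entirely hypothetical illustrations of the idea, not part of my actual methodology.

```python
# A minimal, purely illustrative sketch of the facet-based idea described above.
# The facet names, weights, and scoring are hypothetical placeholders.

from dataclasses import dataclass, field


@dataclass
class Track:
    title: str
    genre: str
    # Hypothetical musical/extramusical facets, each scored 0.0-1.0
    # (e.g., energy, warmth, lyrical melancholy, nostalgia value).
    facets: dict = field(default_factory=dict)


def facet_similarity(a: Track, b: Track, mood_weights: dict) -> float:
    """Compare two tracks facet by facet, ignoring genre labels entirely.

    mood_weights lets a listener's current emotional state emphasize or
    de-emphasize particular facets (e.g., weight 'warmth' higher on a gloomy day).
    """
    shared = set(a.facets) & set(b.facets)
    if not shared:
        return 0.0
    score = 0.0
    total_weight = 0.0
    for facet in shared:
        w = mood_weights.get(facet, 1.0)
        # Closer facet values contribute more, scaled by the mood weight.
        score += w * (1.0 - abs(a.facets[facet] - b.facets[facet]))
        total_weight += w
    return score / total_weight


# Usage: a favourite indie-folk track might turn out to sit close to an
# unfamiliar ambient piece, despite the genre labels saying otherwise.
favourite = Track("Hypothetical Folk Song", "indie folk",
                  {"energy": 0.3, "warmth": 0.8, "lyrical_melancholy": 0.7})
candidate = Track("Hypothetical Ambient Piece", "ambient electronic",
                  {"energy": 0.2, "warmth": 0.7, "lyrical_melancholy": 0.6})

print(facet_similarity(favourite, candidate, mood_weights={"warmth": 2.0}))
```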

On the other hand, I’ve also considered the concern that such a sophisticated recommender tool might create unintended problems. More specifically, while it could broaden musical tastes laterally, and not in the sense of “cultural uplift,” the hypothetical system could potentially become “totalizing.” (That is, if it were sufficiently scalable.) A respondent raised this possibility after I gave a presentation related to my topic. Basically, while such a system might give people the freedom to explore music in ways that traditional systems do not, it might become too finely tuned to the more intricate (and even intimate) factors that define our musical tastes. In other words, greater freedom to explore music might also mean less privacy, especially if one has a registered account. A kind of Pandora’s Box, rather than a souped-up Pandora.

Getting with the Programs

Those of us who have haunted academia long enough are aware of the idiosyncratic structures of departments, faculties, programs, schools, and other terms used to identify and consolidate a variety of disciplines. In my case, I’m a Library and Information Science (LIS) doctoral student in the University of Western Ontario’s Faculty of Information and Media Studies (FIMS), which encompasses programs that essentially relate to information and communications. LIS exists alongside Journalism and a variety of programs related to media, including Media Studies (MS).

Essentially, MS examines how various forms of media operate within economic, political, and sociocultural contexts. (And as you might have guessed, the person who mentioned the potentially totalizing nature of the hypothetical recommender system was associated with that program.) To aid with such analyses, MS draws upon a whole host of approaches that fall under the umbrella of “critical theory.” While far from comprehensive, the following is a brief encapsulation of (1) why you might have shuddered upon seeing the term “critical theory” and (2) what critical theory attempts to do.

Definitions of the scope of critical theory may vary, but essentially its various manifestations critique mainstream power structures and ideologies. Of course, a brief description belies the complexities of critical theory, including the intellectual lineages among various thinkers whose ideas more or less trace back to Marx (with dashes of Freud), as well as the different approaches, agendas, and backgrounds among critical theorists themselves. Admittedly, though, unless one is inclined to study them specifically, such writings can be rather thick to comprehend, and the names difficult to keep straight. Since many of the big-name critical theorists tend to come from continental Europe, especially Germany and France, translations provide an additional challenge, as do concepts originally intended for a narrow audience within specific historical contexts. At least to outsiders, some writings can even come perilously close to reading like self-parody, or even glorified tinfoil hattery (except with fancier terminology and without mention of shape-shifters from Alpha Draconis). Nonetheless, critical perspectives are useful for considering how certain mainstream ideologies are embedded (to varying degrees of subtlety, anyway) in the media, ranging from different forms of news and entertainment to the design of information systems.

Although MS is steeped in critical theory and LIS still tends to shy away from it, some LIS folks work with such perspectives as well, and even teach courses on the topic. An emeritus faculty member from my program also co-edited a compilation of fairly accessible essays by other LIS scholars about critical theory, with each one describing how the ideas of various theorists could be applied to LIS. Parts of it, including an introduction that does a better job of explaining things than my brief encapsulation, are available on Google Books.

I also had an opportunity to work last academic year as a teaching assistant (TA) in a first-year course about information and social contexts. This was for the faculty’s Media, Information, and Technoculture (MIT) program, which employs TAs from both LIS and MS. During one of the two terms I TA’d, the course was taught by an MS PhD student who took a broad historical approach to information and related media, from cuneiform to computers. The course was also divided almost evenly between pre- and post-World War II periods. While it didn’t go into great detail about specific big-name critical theorists, this is where I first heard (at least to my knowledge) the term “solutionism.” Coined by Evgeny Morozov, solutionism pertains to the primarily market-driven impulse to provide technological solutions to practically any problem, including some that might not be “real” problems at all. After describing a futurist’s hypothetical “smart contact lenses” that could make homeless people invisible to their wearers, Morozov states in his New York Times op-ed piece “The Perils of Perfection”:

All these efforts to ease the torments of existence might sound like paradise to Silicon Valley. But for the rest of us, they will be hell. They are driven by a pervasive and dangerous ideology that I call “solutionism”: an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are “solvable” with a nice and clean technological solution at our disposal. Thus, forgetting and inconsistency become “problems” simply because we have the tools to get rid of them — and not because we’ve weighed all the philosophical pros and cons.

As pointed out in the course I TA’d, and not much of a surprise given my background in history, none of this is really new. Going back many years, we’ve wanted to find ways to improve our lot in life. Various technologies, including those related to information and communications, have indeed enabled us to do just that. However, especially when various media tout “revolutionary” or “game changing” technology, we all too often focus unquestioningly on the positive outcomes of any new technological advance, and not so much on some of its more concerning aspects. This is certainly true if it suits the ideological stances of interests who already hold power, and who want to keep it that way.

A Brief History of Progress

As my TA’ing experience made me realize more keenly, the dilemmas we face today with regard to information and the emergence of various technologies aren’t really that new. At least within a “Western” context, one can find some affinities with trends that emerged throughout the 19th century. By that time, we had generally placed more faith in ourselves and our observations, rather than in the infinite wisdom of a distant omnipotent superbeing. Of course, religious belief didn’t go away; it just wasn’t quite as central as before. One can easily trace such tendencies back to the Renaissance and Enlightenment. The big difference is that the 19th century saw nations developing in earnest their own forms of omnipotence… bureaucracies accumulating the appropriate data, in order to build national ideologies and define various domains of knowledge for use by the state. As pointed out in one of the course readings:

Between the middle of the eighteenth and the middle of the nineteenth centuries, there arose a new kind of empiricism, no longer bound by the scale of the human body. The state became a knower; bureaucracy its senses; statistics its information (Peters, p. 14).

Tied to the aforementioned information trends, industrialization (and consequently urbanization) was also taking off at that time, along with science. And it seemed that, with enough effort, we could solve practically any problem and explain practically anything with correctly applied techniques… even as the world was figuratively becoming bigger, and we had to figure out ways to get a handle on it! Certainly, it seemed a more appealing option than (1) putting faith in a distant omnipotent deity who likely didn’t exist or (2) getting killed for not believing “correctly.” And how could one go back to a “simpler” time, anyway?

In the 20th century, however, it became more readily apparent that an uncritical faith in progress had its own share of problems as well. The sinking of the “unsinkable” Titanic in 1912 was perhaps the first well-publicized and fabled example of this counter-perspective. Furthermore, mass production techniques facilitated the manufacture of machines for mass killing, science could be misused and manipulated to “prove” the superiority (and inferiority) of certain groups, and (courtesy of IBM) efficient systems of tabulation could be used to register people and divide them into categories. All three of these factors are intricately related to the Holocaust and, upon further reflection, the notion of the banality of evil: that horrible things happen when people are “just following orders,” or even when “good people do nothing.” It isn’t always some ur-Satan, evil genius, or “mad scientist” plotting them.

Usually, anyway.


In the course I TA’d, Nazi Germany was presented as a “case study” of how our faith in progress and seemingly neutral information can go horribly wrong. Of course, it’s far from being the only example. Under Stalin’s reign, the Soviet Union placed quantifiable “progress” above human lives as well. This was the case with an artificially created famine that killed millions of Ukrainians, after a number of them had resisted collectivized state-run agriculture schemes. No surprise, given that the Man of Steel himself is supposed to have said, “One death is a tragedy; one million is a statistic.”

Such faith in bureaucratization, efficiency, calculation, and technology in a broad sense wasn’t exclusively the province of obviously brutal dictatorships. Just to name a few examples from the United States, there was Frederick Taylor’s research into improving efficiency within industrial contexts, as well as innovations in the efficient manufacture of automobiles by Ford Motor Company (which Stalin himself admired greatly; wrong ideology, right technique). And it continued into the post-World War II era, with attempts to get a handle on the exponential growth of “information” or “data” (typically related to science and technology) that had emerged during the war, and which had implications for the Cold War arms race. As well, coming to the fore in the 1960s and 1970s was the notion that we would enter a “post-industrial” society, where information would become a more important commodity than manufactured goods.

And that time seems to have arrived.

We’ve certainly been told as much plenty of times.

Conclusion

If one thinks about it, and as the course I TA’d made clear (I’ve only skimmed the surface of its content here), the threads of bureaucracy and efficiency somehow run quite smoothly from the industrial age to the post-industrial / information age (depending on how one perceives chronological guideposts). And some concerns remain relevant as well, even if the technologies have changed quite a bit. For starters, what information is being gathered about us and why? Why do we have difficulty seeing the ideologies that underlie seemingly “neutral” systems and “neutral” calculations? Why do the positive aspects of new technologies receive more attention than their potential drawbacks… or, why aren’t they at least tempered with a sense of perspective to counter more readily available media hype?

And, of course, for those of us who work with information, what professional duty is implicit in considering all of these concerns? From what other fields might we draw ideas? How do we solve the problem of solutionism, or at least work our way through the related dilemmas and temptations that emerge? Put another way, how can we eat our apple, and have it, too?

Well, the comments section below seems a good place to start. The wonders of technology…
