It takes only a quick retrospective to conclude that technology has become closely linked with our daily lives and society. It has revolutionized the transfer of information; it has played its role in revolutions; and now it sees itself weaponized to foster the dismantling of democracies worldwide as well as the increasing ruthlessness of the neoliberal order.
So has data, its partner in crime, which has become an increasingly important asset to numerous fields of science. Data is how personalized content is served to you and how companies know whom to target and how – and sometimes, it can be powerful and damning in the proper hands.
As these two become hot commodities in our society, it becomes important to holistically consider their implications to give ourselves a better understanding of the problems at hand and how we can effectively solve them.
With this in mind, the Philosophical Society of UPLB’s recent series of educational discussions focuses on technology and its implications, especially in the new normal. Of particular note are their talks on technological determinism, critical thinking, and the Industrial Revolution 4.0.
Each industrial revolution (we’re on our fourth), true to the name, brought radical changes to the means of production in its respective society. The latest one, which we happen to be witnessing, is marked by the rise of the so-called internet of things (IoT) and networks, among others. The fourth IR can be characterized, more than by society’s increasing interconnectedness, by the way it makes data incredibly valuable.
To be clear, data has been important for much of history (surveillance is one infamous example), and while the third IR brought the world wide web and globalization to prominence, the rise of Big Data is relatively recent and thus particularly relevant to this era.
At the same time, this decade alone has seen a string of controversies that called our previously unthinking trust in technology into question. Wikileaks democratized the release of sensitive information (at least for a while), while the Snowden revelations thrust mass surveillance into the public spotlight. Facebook in particular has allegedly been involved in electoral interference and the rise of anti-vaccine hysteria and far-right extremism, and has even been accused of aiding in genocide. While previous industrial revolutions brought similar problems, it can be argued that our current ones are radically different in scope. One might find this recurring theme distressing and ask whether the problem lies with humans or with technology.
However, it is dangerous to confront this problem through the lens of technological determinism – a philosophical theory that holds that technology determines the course of development in human society. It downplays human agency – our capability to freely create technology and to alter it after the fact – in favor of technology’s supposed autonomy, its free will and influence, and its capacity to effect change in human affairs.
How we approach our relationship with technology can have considerable consequences, and the most notable example is in the homeland of tech innovation, Silicon Valley. Its ideologues’ libertarian approach to free markets, mixed with a belief in technological determinism, produced the view that progress is the result of free-market-driven technological innovation rather than of existing power structures (Barbrook & Cameron, 1995).
The result has been not only a belief that any problem can be fixed through ruthless innovation but also, by extension, an arrogance among tech’s top men about their ability to do so. Look no further than the persistence of opaque “algorithms” in content moderation – to the detriment of creators and consumers alike. But the real threat is that our most pessimistic opinions on technology automatically take on a technologically deterministic viewpoint. The belief, for instance, that social media is inherently negative stems from this: it assumes that the problems of social media lie within the medium itself and disregards any influence on the end of the user.
By that logic, is it technology itself that fosters online harassment? Determinism also precludes any hope of prefiguration – the belief that we must act according to the society we wish to see is obviously rendered moot if you think humans inherently can’t do anything. It paints a very grim picture of the future to think that we are, in the end, slaves to our own inventions.
Ultimately, the question remains: what does this say about tech and data, and our relationship with the former? If ruthless technological determinism has contributed to the dangerous digital landscape we are in, do we look to its inverse for a solution? Perhaps there is value in an anti-deterministic viewpoint that restores human agency. And if technology cannot, by itself, determine our affairs, then the only ones who can are ourselves. [P]
Photo by Gerard Laydia