Technology’s influence on society. What are some of the most recent technological developments in the Netherlands, what are their drawbacks, and what kind of impact do they have on people?
Influence of technology
I hope that after reading this article, you’ll have a better understanding of how technology influences people and society. In order to achieve that, I will first discuss the most relevant technological trends at the moment, and then write about some of the downsides and ethical issues of technology.
Speaker on the influence of technology
I also frequently give lectures and presentations on the impact of technology on people, businesses and society. In May 2019 I gave a lecture at the University of Twente’s Studium Generale.
Watch the lecture below:
My lecture on the influence and impact of technology at the University of Twente.
A lot of my inspiration for this article came from the conversations I had with tech thinkers, visionaries and trend watchers. I was inspired by conversations I had for my podcast show and YouTube channel, as well as by interesting people that I met during events, conferences and symposia.
I attended the Brave New World conferences in Leiden in 2017 and 2018 [link at the bottom]. The purpose of these events was to explore the link between the future of technology and that of humanity, through art, philosophy, science, business and stories.
One of the speakers at the conference in 2017 was Angelo Vermeulen, who previously featured as a guest on my podcast [link at the bottom]. In the rest of this article, however, I don’t distinguish between speakers with regards to which edition of BNW they spoke at.
Technological developments in the Netherlands
According to artist Frederick de Wilde, our society is currently undergoing a full transformation. The main driver of change in this is technology – be it genetic engineering, artificial intelligence, or neuro-, nano- or biotechnology.
It’s a complicated and, at the same time, promising era. On the one hand, there are plans to print life on Mars and scientists are able to read minds using a computer [link at the bottom]. On the other hand, there are still 100 million homeless people worldwide.
In short: are we concentrating our efforts and attention on the right problems? When it comes to the impact of technological developments, we usually talk about utopian or dystopian scenarios. If we don’t want to end up with a dystopian future like the ones depicted in Blade Runner 2049, The Matrix or Mad Max, we have to think about the impact of technological developments now.
We shouldn’t be too conservative and risk blocking or impeding progress, but we shouldn’t be too progressive and make choices that we’ll end up regretting either. That’s exactly why I am interested in new technologies and, in particular, in the possible consequences and implications that they could have. In particular, I’m interested in technology’s impact on people, companies, organizations, the government and society. What kind of practical questions do technical and technological developments raise? But also: what types of ethical issues might arise?
Technology thinker Kevin Kelly is optimistic about this. His vision is that we, as humans, will get better at dealing with ethical issues, as the progress of artificial intelligence forces us to program rules into our software. Nell Watson is currently working on a project about crafting rules into technology [read more about her initiative below].
In this section, I will write about the most important technological developments of the moment.
Technological developments 2020
I previously wrote an article about trendwatchers, where I talked about the most important developments for this year and next year [link at the bottom]. This also came up as a hot topic of discussion at the conference in Leiden. Of all the technological developments that are going on, these are the ones that I think will have the greatest impact:
- Sensor technology
- Internet of Things
- Big data and artificial intelligence
- Blockchain
- Human enhancement
#1 The increased use of sensor technology, both in the public space (think of ‘smart cities’) and by humans themselves (the ‘quantified self’ movement) [link at the bottom].
#2 The so-called Internet of Things. All kinds of devices and appliances are increasingly becoming internet-connected, not just sensors. A well-known example is that of a refrigerator that can ‘communicate’ with the supermarket when you’ve run out of milk, and order it for you.
#3 The increase in sensors and increased computing power have given rise to ‘big data’. This refers to datasets that we, as humans, can no longer analyze and interpret in a systematic way. Smart algorithms (i.e.: artificial intelligence) are going to help us with this [link at the bottom].
Take Ellie. This is a software program that can use facial recognition technology to detect whether someone has depressive tendencies, and is able to train itself on how Ellie (a virtual therapist) should react to them [link at the bottom].
#4 The blockchain is a distributed ledger. According to experts, blockchain technology will make the registration of the ownership of goods both faster and safer [link at the bottom]. The best-known example of this is the ownership of money, such as bitcoin [link at the bottom].
#5 Human enhancement stands for the expansion of human capabilities. Neil Harbisson (picture below) is a well-known example of this. He’s used technology to extend and expand his senses; he’s color-blind, but through the use of technology, he is able to hear colors through a skull implant.
In this section, I’ll write about some of the social consequences of technological progress.
Disadvantages of technological developments
What are the disadvantages of technological developments? I regularly engage in discussions and debates about this. Is technology itself inherently positive or negative? A well-known example to illustrate this tricky question is a knife. Looking at this as a technology, it’s something that can be used to cut bread, but can also be used to seriously injure someone.
In other words: I think it’s more about how a technology is used or applied. Broadly speaking, there are currently a few technological developments that are said to be (potentially) dangerous.
- Artificial intelligence
- Privacy and the power of tech companies
- Social inequality
- Being too data-driven
#1 It’s a familiar scenario in science fiction movies: computers and machines are going to completely destroy mankind. The Brave New World conference took place at the same time as the Leiden International Film Festival. During one of the workshops, we dissected the film Ex Machina under the guidance of a film studies expert.
*SPOILER ALERT* In this movie, as in many other movies about the future, computers develop consciousness and realize that they no longer need humans.
I’ll elaborate on this more in my article on artificial intelligence, but I will say: this scenario is not very likely. The arms race between people, countries and other actors concerning artificial intelligence probably poses a bigger threat [link at the bottom].
#2 When it comes to technology, the discussion often revolves around the power of tech companies and the question of how much these companies know about us. The greatest danger is that a lack of privacy could restrict your autonomy, e.g. because you have been classified in a certain category by a company or the government, based on your personal data. As a result, you might no longer be eligible for a loan, a passport or medical treatment in this scenario.
#3 Does technology contribute to a growing social divide? Will there soon be a divide between groups of people who do have access to technology, and groups who do not?
During his talk at the conference in Leiden, Professor Philip Brey (University of Twente) gave two concrete examples which illustrate that this is already happening. As many as 50% of women in South Korea between the ages of 20 and 30 have undergone plastic surgery. Does that mean that you don’t belong or fit in if you haven’t had any cosmetic surgery?
Or take doping, which has ruined many lives and sports. Is it still possible to have a level playing field when some athletes secretly use doping, thereby also putting their own health at risk?
#4 Evgeny Morozov is a critical thinker who focuses on the downsides of technology. His book To Save Everything, Click Here was published in 2013 [link at the bottom]. According to him, the tendency of governments and companies to simply want to improve everything ignores the complexity and interconnectedness of some of the problems we’re dealing with.
One of the most extreme ideas that illustrates this point is called the Bincam, a system that takes pictures when you throw something in the trash and then analyzes the images [link at the bottom]. The idea behind it is that people will throw away less waste, through gamification. Would you feel comfortable with companies keeping an eye on your trash can?
In this part I’ll write about the societal consequences of technological progress.
Technological developments influence social developments, although societal changes aren’t just driven by technology alone either. The following social developments do have a clear link to technology:
- Social unrest
- Fake news
#1 Social unrest. In my podcast with Yuri van Geest, we talked extensively about robotization: a process that will cause a large number of jobs to disappear in the next decades. According to Yuri, this will mainly affect the middle class, and it is already leading to social unrest. This can also be seen from the results of several elections, such as the presidential elections in the United States in 2016.
#2 Fake news. Around the same time as the 2016 elections in the United States, the theme of fake news emerged. This term will become even more relevant over the next few years, given that smart software now enables you to make it look like someone said something that they never actually said [link at the bottom]. The authenticity of the news source and trust in the medium will become even more important.
Societal developments that emerge cannot always be linked directly to technology. The same goes for solving social issues. In his book To Save Everything, Click Here, for example, Morozov writes about ‘solutionism’ [link at the bottom]. This is the tendency to simplify problems, place them outside of their context and expect technology to provide the perfect solution.
At the same time, the relationship between technology and society is actually something that the business sector is focusing on at the moment. Boris de Ruyter spoke at the Brave New World conference in Leiden. He is a researcher at Philips, where he researches humans’ interaction with technology. He sees the following development taking place: we’re moving from a focus on technology (technology-centered), to the user (user-centered), to the user’s experience (experience-centered), to the broader impact that technology has (societal-centered).
However, I do have to point out that Philips’ primary interest is still its turnover. It’s a nice bonus when Philips’ products have a positive influence on society, but companies will only concern themselves with this if they’re also making enough money.
In this section I will write about the consequences of technological progress, including human enhancement and the convergence of biology and technology.
Ethics and technology
Using technology to improve and enhance ourselves is nothing new; even the invention of the wheel, fire, clothes and the toilet illustrates that we have a long history of doing so. But according to Professor Brey, human enhancement is fundamentally different, because an individual’s choice to enhance themselves can also have direct and indirect implications for others.
Human enhancement will give rise to ethical issues which, in his opinion, we should already start to think about now. Think of aspects such as health, safety, equality, identity, social differences and autonomy.
A particularly interesting ethical theme that he mentioned is convenience versus effort. If we no longer need to study hard to become smarter or train hard to become stronger, how will that affect us as humans? And how will it impact society?
That reminded me of a question Valerio Zeno asked me when he interviewed me for his TV show Valerio4ever. ‘If there’s a pill that keeps me healthy, doesn’t that mean that I can eat badly for the rest of the day, that I don’t need to move, and that I can fill myself with alcohol?’
This could lead to traits like intelligence and physical strength being ‘for sale’ for people with money, in turn contributing to increased social inequality.
Biology and technology
We don’t have to use technology exclusively for modifying humans. Swedish science journalist Torill Kornfeldt wrote the book The Re-Origin of Species [link at the bottom]. George Church (Harvard University) is looking into the possibility of using CRISPR-Cas9 technology to reconstruct the DNA of mammoths. In theory, once their DNA has been reconstructed, we would be able to bring this species back to life.
The question that comes to mind, for me personally, resembles the dilemma I mentioned before of convenience versus effort. If we can easily correct our mistakes by simply using technological solutions, e.g. with regards to environmental pollution, wouldn’t this lead us to behave even more irresponsibly towards ourselves, nature and the planet?
In this part I’ll write about the impact of progress on society. Among other things, I look at an example from China.
Technology’s impact on society
What impact does technology have on society? Entire systems are currently undergoing fundamental changes and transitions. The consensus at the conference in Leiden was that we shouldn’t generalize when we consider technology’s impact. Otherwise, we might end up with a so-called ‘technopoly’. According to communication scientist Neil Postman, this concept refers to a society that ‘seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology.’
But it’s not that black and white. In a way, toilets and clothes are also forms of technology that we are very happy with now. We have to assess and look at technology on a case-by-case basis. One example that came up at the conference is the possession and use of weapons. In the United States, you can own a weapon; in Europe, in principle, only the army and the police are allowed to have weapons. In the end, those are decisions that were made by society.
On the other hand, you could also say that we are already on a sliding scale in some domains. Is it still possible to bring this to a halt?
Looking at our mobile phones, we have already outsourced a large part of our memory, thought processes and decisions. If I take myself as an example: I use Evernote to remember everything (external memory), I ask Siri to look up information for me (external thinking) and I have an app that helps me to navigate through a city (external decisions).
Ned Ludd was a textile worker from England who destroyed two mechanical looms in 1779. Later on, he became known as an almost mythical laborer who wanted to bring the advancing industrialization to a halt. The ‘Luddites’, Ludd’s supporters, were afraid that industrialization would threaten their skills and livelihood.
As Jaap Tielbeke writes in an article in Dutch weekly De Groene Amsterdammer, the Luddites have been a symbol of the fierce battle against innovation ever since [link at the bottom]. But they weren’t the first to do so, as the following list illustrates:
- In 400 BC, the Greek philosopher Plato warned that the invention of writing would lead to forgetfulness;
- Some women didn’t dare to travel by train at first, because they were afraid that their wombs would fly out of their bodies;
- People were afraid that evil spirits could enter their living rooms through telephone cables.
These predictions turned out to be false. Does that mean that perhaps, the current doomsday predictions about the influence of technology aren’t true either?
Evgeny Morozov, who strikes a similar tone, is critical of the prevailing view that the internet is the ultimate technology and the ultimate network. Of course, the internet ensures that people, groups and institutions can connect with each other more quickly and easily. This facilitates better communication and innovation. But it’s not the internet itself that brings this about. It’s still people and companies who come up with ideas, develop products and create services that allow this.
A concept that is related to this is ‘epochalism’. This term alludes to the idea that we live in a unique time, in which different rules apply than in the past. A good example is the idea that prevailed about electricity around 1852: people believed that the construction of electrical wiring for the introduction of electricity would lead to ‘social harmony’.
People tend to think of their own era as unique, but that’s not true. Of course, nowadays we can’t do without electricity and we can barely go without the internet. But according to Morozov, it’s not right to see technology’s impact on progress and innovation as a one-way street; technology and progress influence each other in complex and diffuse ways.
Looking back at the past, history has not necessarily been positive for humanity. In his book Sapiens, Harari writes that ‘history’s choices are not made for the benefit of humans. There is absolutely no proof that human well-being inevitably improves as history rolls along.’
Even more so than at any other time in the past, everything is subject to change. Of course, in some aspects, we as humanity are certainly making progress. Take anaesthesia; this is an innovation that we can definitely no longer do without.
Another interesting point in Harari’s book is that science and innovation always develop in ways that are shaped by politics, economy, religion and culture. Even when scientists feel completely free to conduct their research, the allocation of budgets and subsidies is always made based on certain criteria, such as on the impact on the economy or on ideological grounds. As a result, research into things that yield little money, but are perhaps very useful, doesn’t always take place.
An interesting example is the Luddites’ struggle, which was a political struggle. The Luddites were not defeated by technology or by their employers, but rather by the British army, which forcibly crushed their rebellion. This shows that science and technological progress are often closely linked to power, politics and other interests.
As I described in the previous section, the way in which a society deals with technology, or what it invests in, is largely culturally determined. In the Western world, for example, we take a different approach to privacy than in China. The most well-known example of this is the Social Credit System, set up by the Chinese government. Commercial parties such as Alibaba and Tencent run related scoring systems; Alibaba’s version is called Sesame Credit.
The Chinese government asserts that it has implemented this system because China doesn’t have a good credit system. In the Netherlands we have a regulatory body called the Credit Registration Office, but no such thing exists in China. Now, the Chinese government has set up a system that utilizes modern-day possibilities, using big data and algorithms.
From our point of view, the consequences of this seem extreme. For example, a low score can lead to your children being taken out of school or to being denied access to a flight. Other known examples of factors that could influence your score include whether you walk your dog on a leash, whether you visit your parents often enough and whether you run red lights.
Robin Li of the Chinese technology company Baidu caused anger among many of his fellow Chinese citizens when, in March 2017, he stated that many Chinese are all too happy to give up their privacy in exchange for safety or convenience. The interests of the state are almost always put before the interests of the individual. According to Eefje Rammeloo, who wrote about this topic in her article for De Groene Amsterdammer, the desire and need for safety and control is deeply embedded in society [link at the bottom].
While Sesame Credit demonstrates the power of the government, if we turn our focus towards the United States, the emphasis is more on the power of corporations. In the words of bestselling author Yuval Noah Harari, major technology companies such as Facebook, Amazon, Google and Apple have access to the means to ‘hack’ human beings.
The more these companies know about us, the more precisely they can target us to entice, influence and persuade us. In other words, they can hack into our desires and needs.
We’re currently paying for the services these companies provide with our data and our behavior. Facebook collects our likes and clicks to show us the perfect overview of news items. We feed Google every day with our searches (Google.com), our emails (Gmail), work files (Google Drive), appointments (Google Calendar) and smartphone use (Android). Of course, Amazon doesn’t intend to be left behind either and has even made its way into the living room with the Amazon Echo – a smart speaker that can be controlled with voice commands, such as playing a certain song or placing an order (with the same Amazon account).
This can lead to unhealthy or undesirable situations, of which the filter bubble is one example. The election of Donald Trump as president of the United States in 2016 was, according to activist Eli Pariser, a clear example of the filter bubble in action. Social media algorithms, driven by likes, show social media users more of what they’ve previously reacted positively to. As a consequence, your own ideas are continuously repeated back to you and you only connect with like-minded people online.
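That feedback loop can be sketched in a few lines of Python. This is a deliberately simplified illustration, not how any real platform ranks content; the user history, topics and feed items are all invented. Items whose topic the user has liked before float to the top, so dissenting items sink out of view and the bubble reinforces itself.

```python
from collections import Counter

def rank_feed(items, liked_topics):
    """Rank feed items by how often the user has liked their topic.

    items: list of (title, topic) tuples.
    liked_topics: Counter mapping topic -> number of past likes.
    Topics the user never engaged with score 0 and sink to the bottom.
    """
    return sorted(items, key=lambda item: liked_topics[item[1]], reverse=True)

# Hypothetical user history and feed, invented for illustration.
likes = Counter({"politics_a": 5, "sports": 1})
feed = [("Opposing view", "politics_b"),
        ("Match report", "sports"),
        ("Familiar take", "politics_a")]

for title, _topic in rank_feed(feed, likes):
    print(title)  # "Familiar take" first, "Opposing view" last
```

Every new like on a “politics_a” item increases that counter, which pushes similar items even higher the next time the feed is ranked; that is the loop Pariser describes.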
Essentially, we’re already living in a time where we’re being watched, monitored and categorized online and offline, both by companies and by governments. This could lead to an ‘algorithm society’ or an ‘algocracy’. That’s a society in which all decisions are made on the basis of data and algorithms. Something that illustrates this idea well is the first episode of season 3 of the science fiction show Black Mirror. In this episode, called Nosedive, we get a glimpse into the life of a woman named Lacie. She seems to be living in a pastel-coloured perfect world.
However, every interaction in this fictional, futuristic world can be given a rating, going beyond services we currently use such as Uber, Airbnb, or reviewing your order from Amazon. Are you ordering a cappuccino? Then you rate the barista. Your social status can be tracked online and has real consequences: Lacie, for instance, can only book a flight if she has a certain score. The same goes for renting a car, living in a popular apartment complex or getting an invitation to a party.
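Mechanically, the score-gated access depicted in Nosedive boils down to a simple threshold check per service. The sketch below is purely illustrative; the services and cut-off scores are invented, not taken from the show or from Sesame Credit.

```python
# Invented thresholds on a 0-5 rating scale, for illustration only.
ACCESS_THRESHOLDS = {
    "book_flight": 4.2,
    "rent_premium_car": 3.8,
    "popular_apartment": 4.5,
}

def allowed(score: float, service: str) -> bool:
    """Return True if the social score clears the threshold for a service."""
    return score >= ACCESS_THRESHOLDS[service]

print(allowed(4.1, "book_flight"))      # False: like Lacie, refused a flight
print(allowed(4.1, "rent_premium_car")) # True
```

The unsettling part is not the check itself, which is trivial, but that a single opaque number would feed every such gate in society at once.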
Is that what the future will bring, looking at China? Or what life in Western countries will look like in ten years’ time? Or both?
Kevin Kelly is an American author and futurist. Instead of speaking of ‘technology’, he prefers to talk about the ‘technium’. This is an organism that, at some point, has acquired a certain degree of autonomy. The technium has its own will.
It’s a self-strengthening force that is constantly getting bigger and more complex and which generates progress. Not all technology is great, but as long as the advantages outweigh the disadvantages, the world will be fine; that is what Kelly’s view boils down to. He quotes biologist Simon Conway Morris: ‘Progress is not some noxious by-product of the terminally optimistic, but simply part of our reality.’
The technium is autonomous. According to Kelly, humans can try to control technology, but we will never have a full grip on it. So the best thing to do is to make sure that we ride the wave of technological progress as smoothly as possible.
In this part, I’ll write about why the government isn’t able to provide a strong counterbalance to the downsides of technological progress.
Many politicians and entrepreneurs believe that they should do all they can to stimulate innovation and to anticipate the inevitability of technology as well as possible. Johan Schot, a historian of technology at the University of Sussex, also reflected on this in the previously mentioned article in De Groene Amsterdammer: ‘Oftentimes, a new technology is rolled out without any public debate preceding it. Even though the negative implications of such a technology are then borne by society as a whole.’
Schot mentions climate change as an example. Fossil fuels were at the very core of the industrial revolution, but now that we know the downsides of them, taxpayers are the ones who have to bear the costs.
He argues that a lot of technology is developed for and by the rich. ‘Technology is often approached uncritically, as a cure-all for improving the world. To this day, all ideologies share a positive outlook on technology, from capitalists to Marxists and fascists.’
Sometimes, technology seems to hit the government like a tsunami. Why are there so few counterforces against technology companies? In his book Technology vs. Humanity, Gerd Leonhard offers a few explanations:
#1 Big profits. There is a lot of money to be made by taking advantage of exponential technology such as big data and artificial intelligence. Governments and politicians are not very inclined to get in the way of companies who are doing just that, as businesses provide a lot of tax revenue and employment.
#2 Limited legislation. There aren’t many global laws and regulations on exponential technology yet. In 2017, I attended a meeting organized by the Dutch Ministry of Justice and Security and the Public Prosecution Service, about legislation on technology. They’re having a hard time with this as well.
Naturally, one of the characteristics of laws and regulations is that they are carefully and thoroughly prepared and discussed with all parties involved. That process takes time. Another question is how governments can find a way to create general frameworks, without constantly having to jump on all new trends and developments.
An additional problem is that many companies operate on a multinational basis. Therefore, making sure that agreements are actually adhered to also requires coordination between several countries.
#3 Addiction. Exponential technology makes our lives a lot easier. A lot of technological solutions capitalize on our desire to be lazy. Some technologies, such as apps on the smartphone, are also addictive. In my podcast episode with Wouter van Noort, we talked about smartphone addiction in detail [link at the bottom].
In that light, I also understand why Gerd Leonhard is not necessarily optimistic about how we’re currently handling technology: ‘We lack precaution and foresight when it comes to the use and impact of technology.’
What role does power play when it comes to technology and the future? That’s what I will discuss in the next section.
Power and technology
In this section, I’d like to make a distinction between the influence of power and technology at the geopolitical scale and at the level of individual organizations.
Let’s start with power at the geopolitical level. Should Europe choose between the power of the state in China and the power of the market in the United States? Or is there an alternative model? Marleen Stikker, director of Waag Technology & Society in Amsterdam, is an advocate for open source. She draws a comparison with the commons we used to have in the past, i.e. communal pastures in villages.
In a VPRO Tegenlicht documentary, she notes that many people think that open source means that everything is allowed: ‘That’s not the case. Just as in the past, the community agrees on certain rules and on how they want to deal with the code, the data and the applications.’ In order to curb the power of technology companies and to prevent us from ending up at the other end of the spectrum, with an omniscient and omnipotent government, Stikker argues that citizens should unite.
As Stikker proposes, we can unite as citizens. Or should we perhaps just trust that those in power in business and government will ultimately do what’s best for us? Although that would be a very optimistic outlook on humanity, the past shows that we cannot rely on optimism alone.
In 1945, English journalist and author George Orwell published the book Animal Farm. That was four years before he published his other masterpiece: the dystopian futuristic novel 1984. In this famous novel, he paints a gloomy picture of what mankind would look like in 1984: a totalitarian society under the control of Big Brother’s all-seeing eye, in which human freedom was completely restricted.
George Orwell was way ahead of his time with 1984, but the same goes for Animal Farm.
The book is about a commune of smart farm animals. Two pigs, Snowball and Napoleon, come up with the idea of chasing away their tyrannical boss, so that the animals can work and live as equals. At the end of the story – spoiler alert – the commune has become a dictatorship. The story is seen as a fable or allegory that alludes to the Russian Revolution, under the reign of Joseph Stalin.
In their book Nooit Af (‘Never finished’), authors Martijn Aslander and Erwin Witteveen cite Animal Farm as a warning that every (future) leader should take to heart. If an executive board, management, or director is given too much power, it has negative consequences for the entire system. In Animal Farm, the farm animals were ultimately less free than they were before the revolution took place.
This note of caution is not only true for fictional stories, but is also applicable to the business community, in public organizations and in politics. The first known case of this in the business sector was in 1494, when the exorbitant expenses of the Medici family in Florence caused their eponymous bank to go bankrupt [link at the bottom]. Current examples are the scandals at Enron (accounting fraud), ING Bank (money laundering) and Bernard Madoff (customer fraud).
Technology companies are no exception to this either; take the recent fuss around Uber and Tesla, and especially around their leadership. To zoom in on Tesla: in an interview in Dutch daily de Volkskrant, Janka Stoker, professor of Leadership and Organizational Change at the University of Groningen, stated: ‘Musk is Tesla, he is one with the company.’ According to her, that level of identification is both a strength and a weakness. ‘Strong CEOs organize their own counterbalancing forces. Musk has gathered friends around him instead. For example, his brother Kimbal is on Tesla’s board of directors.’ [link at the bottom]
The problem of power is not just present in the business world, but also in other areas. Recent examples include Beatrix Ruf’s resignation as director of the Stedelijk Museum due to a conflict of interest, the misconduct of influential casting director Job Gosschalk, and top scientists who allegedly treat their staff tyrannically [link at the bottom].
It’s too easy to put all the blame on the individuals who made these mistakes. It is more likely that the system and the environment also generate this type of behavior.
Hence the lesson from Animal Farm: it’s important that every organizational structure ensures sufficient opportunities to correct its leaders in a timely manner. Even in an era of disruptive technology, aspects such as vanity, ego, competition and cooperation remain very fundamental human traits.
This section will discuss the impact of technology, the concept of trust, and ethics.
The central question therefore remains: how should we deal with the plethora of technological developments that are rapidly approaching us? How do we intend to use artificial intelligence? How will we deal with CRISPR-Cas9 and other genetic modification methods?
How does neurotechnology influence the way we look at ourselves? The implications are much broader than we can imagine now. Scientists have not yet proven that there is such a thing as the soul. But what happens if it is demonstrated that there is a soul, or if it is scientifically proven that there is not? Either outcome would have a major impact on the role of religion and spirituality in our lives.
No matter how smart computers with artificial intelligence might become, in the end it is, at least for now, humans and society as a whole who make these deliberations.
That is also what futurist Gerd Leonhard underlines in his book. He argues that progress in the S.T.E.M. (science, technology, engineering and mathematics) fields is inevitable. In the future, it will become increasingly important to see how these developments correspond with human qualities. He refers to these qualities with the acronym C.O.R.E.: creativity, compassion, originality, responsibility, reciprocity and empathy.
But even if we take into account the skills mentioned by Leonhard, we’re not quite there yet. It also remains up to us, as a society, to make decisions about how we want to use and apply technologies.
According to Professor Peter-Paul Verbeek, with whom I did a podcast series for BNR Nieuwsradio and the Dutch Financial Times, it's a classic dilemma: 'If you know what the social impact is, it often means that a technology has already been rolled out, meaning you are actually too late. At the same time, you're moving too early if you kill a new technology before you know how it would work out in practice.'
Back to the Luddites, the so-called opponents of progress. According to Professor Johan Schot, they weren’t necessarily against specific technologies, but rather against the consequences that these technologies had for society. According to him, this is justified, and this deliberation should still be taken into account today: “The introduction of a new technology is always accompanied by choices about what kind of society we want.”
This is also in line with what Liisa Janssens told me during an interview at Brave New World 2018. She calls this the grey area of technology. How should we deal with that area? We should start by asking questions. An innovation does not inherently have only positive consequences. 'You solve some issues, you leave other issues unsolved, and maybe new problems will arise.'
I’m personally not pessimistic about the future. Why shouldn’t we take advantage of scientific and technological progress? I think we should follow and pursue our curiosity. That’s where the strength of humanity lies: our ingenuity, creative thinking, and our ability to think about the ethical consequences of our decisions.
Robots, machines, bioinformatics and other developments make it possible to turn scarce goods into abundant ones. If these technologies were to take over some of our work, people would be able to dedicate their time to making valuable connections, trying new things, experimenting and leading meaningful lives.
On the other hand, that also requires us, as users, to trust technology companies. But are we capable of doing that? Haven’t they damaged our trust too often already? This is something I talked about with Esther Keymolen, an assistant professor at Tilburg University.
She has created a conceptual framework consisting of four Cs: Context (the firsthand experiences of the user of a technology), Curation (the interests of the businesses that create and develop the technology), Code (the type of technology that is used) and Codification (the rules and regulations around a technology).
Nell Watson made an interesting remark during her lecture: 'Non-fiction is for facts, fiction is for norms and values.' With that in mind, she is currently developing a database of ethical issues to help train algorithms.
According to technology thinker Kevin Kelly, this step is actually extremely important: in order to program ethics into artificial intelligence, we should come to an agreement with each other on what we believe to be good and bad actions. Earlier I wrote an extensive article on technology ethics [link at the bottom].
Annelien Bredenoord, Professor in Ethics in Biomedical Innovation at Utrecht University, has a similar point of view: ‘Responsible innovation starts with an inclusive discussion with philosophers and ethicists. That includes a public debate and a political discussion, because we have to define our values, in the context of autonomy and quality of life.’
This last part zooms in on my conclusion. What does technological progress mean for us as humans and what can we do with it now?
In short, technology is not an isolated matter. It’s closely intertwined with how people and organizations apply it, as well as what we as a society consider responsible and irresponsible. Ultimately, I think Professor Luciano Floridi formulated it quite nicely: ‘No device, no matter how clever, dismisses us from our own responsibility. Humans are responsible, always.’
A related concept is that of 'moral hazard'. Harvard Professor David Keith has discussed it in relation to climate engineering. The idea is that we might start to believe: if technology can solve the climate problem, why should we go through so much effort to reduce CO2 emissions now? Following that line of thought, the solutions offered by technology could lead us to make fewer moral choices or efforts.
It’s clear that it makes no sense to hide behind some kind of technological defeatism, as if technology is something that just happens to us. My vision is that we should keep experimenting and trying. Technological developments will probably lead to some negative consequences, but I personally don’t look at it from an overly gloomy perspective.
By taking the time – now – to think, talk and write about what technological developments mean for us, we can take our responsibility as humans to find the right applications for them. We don’t have a choice.
We want to progress, we want to try, experiment and improve ourselves. It’s in our nature.
The characteristics that distinguish humans from computers and machines are also what we should be focusing on right now. That is what Andrew Keen pleaded for during his keynote as the final speaker on day two of the Brave New World conference in 2018. In an interview with the Dutch daily de Volkskrant a few days before the conference, he was called the 'antichrist of Silicon Valley' [link at the bottom]. He has written a number of books, including his most recent one: How to Fix the Future [link at the bottom].
He quoted the book Brave New World, published in 1932, which also lends its name to the conference he spoke at [link at the bottom]. Keen noted that he sees a lot of similarities between the world that Aldous Huxley described and the present era: 'There's a techno-elite, politics seems to matter less and less and we're addicted to technology.' The solution, he argued, can be found in another, similar book: Utopia by Thomas More, from 1516.
Andrew Keen: ‘The message in that book is that we, as humans, shape the future ourselves.’ Although it might sometimes appear that way, the future does not just ‘happen’ to us.
The book mostly focuses on the question of what it means to be human, and according to Andrew Keen, this question is currently more relevant than ever. His answer: “Being human means that we are able to shape what the future looks like.”
How should we go about this? Keen refers to the past: 'When people say this is the first time that this has ever happened in history, it usually means that they're ignorant of history, and they just assume that what they're doing is new.' This remark reminds me of what Evgeny Morozov (another critical thinker on tech) describes in his books as 'epochalism': the notion that we are living in a unique period of time with its own special rules, even though people have believed that in every era.
Back to Andrew Keen again. In his book, he cites five actions that we as humanity have to take in order to fix our broken future:
- Laws and regulations, e.g. the fines recently imposed by European Commissioner Margrethe Vestager on companies such as Alphabet for breaking the rules;
- Innovation, with entrepreneurs developing products and services that ‘reflect and respect’ us as human beings;
- Consumer choice. It’s also up to us, as users of technology, to make conscious choices about what we do use and what we don’t use. A concrete example: which social media do you use? And are you aware of the consequences of your use of this platform?
- Citizen engagement. In addition to any actions we should take as tech users (point 3), we should also unite in interest groups and use our political rights, for example with regards to laws and regulations (point 1).
- Education. This was a point that Andrew Keen reflected upon a bit longer, because it also has a huge influence on which choices we make (point 3) and how we approach our responsibility as citizens (point 4). According to him, education should focus on what distinguishes us from computers: skills such as empathy, creativity and the ability to ask the right questions. That way, future generations will be better prepared for the future.
Before he spoke at the conference, I recorded a video interview with Andrew Keen [see below]. I thought he was a bit standoffish during our conversation, but according to an acquaintance of mine, who knows him well, that’s a sign that he’s taking you seriously.
Either way, I asked him how he views the use of technology by the Chinese government. According to him, it is almost even more frightening than the power of technology companies in the United States: what freedom do citizens still have?
Especially if artificial intelligence takes off the way experts expect it to. Andrew Keen: 'The internet is nothing if you compare it to how artificial intelligence and virtual reality will soon be able to impact us as humans. In a way, virtual reality resembles Soma in the book Brave New World.'
At the same time, however, he remains positive and returns to the same point that Professor Luciano Floridi made a year earlier: humans remain responsible, no matter what. It’s up to us: we, as citizens, businesses and governments, have to make decisions about the use and application of technology. It doesn’t happen to us, but it does happen very quickly.
See the full interview:
Interview with Andrew Keen
This section contains more information in the form of videos, presentations and a reading list.
At the Brave New World conferences of 2017 and 2018, I did several interviews.
In 2017 I spoke with Neil Harbisson, Moon Ribas and Manel Muñoz about human enhancement, cyborgs and ‘transpecies’.
Interview with Neil Harbisson, Moon Ribas and Manel Muñoz
At the Brave New World Conference 2017 in Leiden, I spoke with futurist, researcher and entrepreneur Nell Watson about ‘machine intelligence’. Watch the video below:
Interview with Nell Watson
Below are a few PowerPoint presentations of lectures and keynotes I've given on this topic.
Speaker influence technology
In 2019, I gave a lecture on technology’s influence and impact on society at BMS (University of Twente, Enschede).
I’ve previously written the following articles related to this topic:
I have read the following books on this subject:
- Book To Save Everything, Click Here
- Book Sapiens
- Book Technology vs. Humanity
- Book The Return of the Mammoth
- Book Brave New World
- Book Utopia
- Book How to Fix the Future
- Book 1984
- Book Animal Farm
These are courses I have followed and events I have visited:
- Conference Brave New World
- Training Biohack Academy
- Genetic modification course at VU Medical Centre
- Conference CRISPRcon
These are external links I used:
- Website Bincam
- Article about printing life on Mars
- Article about China and privacy (in Dutch)
- Article about Ellie
- Article about mind reading
- Article about creating fake videos
- Article about the Luddites
- Article interview Andrew Keen (in Dutch)
- Interview with Marleen Stikker (in Dutch)
- ING scandal (in Dutch)
- Scandal Bernard Madoff
- Scandals Uber
- Scandals Musk (Tesla)
- Interview with Janka Stoker (in Dutch)
- Scandals in museum sector, TV and science (in Dutch)
- Website Annick Elzenga (photos of the Brave New World conferences)
What are your thoughts on this? Leave a comment!