«The more the world fills up with technology, the more it craves ethics. Because what matters is not figuring out whether or not we must be afraid of robots, but how to manage digital society in a coordinated manner». A conversation with philosopher Luciano Floridi, director of the Digital Ethics Lab at the University of Oxford, who warned as early as 1995: «The Internet promotes the growth of knowledge while creating forms of unprecedented ignorance».
Luciano Floridi has never been soft on technology. In 1995, when the web as we know it today did not exist and he was a PhD student in Philosophy, he was already writing things like this: «No one controls the system globally, and the very structure of the Internet ensures that no one will ever be able to control it in the future». Or: «The Internet promotes the growth of knowledge while creating forms of unprecedented ignorance».
«Nobody listened to me», laughs Luciano Floridi today, from the top of an imposing CV: he directs the Digital Ethics Lab at the University of Oxford, is president of the Data Ethics Group of the Alan Turing Institute, and serves as an advisor to big tech companies, governments and the European Union.
Yet it is precisely by virtue of his critical beginnings (which have led him to deal with the ethics of the web for more than 25 years) that Luciano Floridi is today «overall hopeful about the future». He is also optimistic, unlike another great Internet thinker, Evgeny Morozov (read a conversation with him here, published on design@large), about the role of Europe in Artificial Intelligence.
Our data is in the hands of a few companies that consequently have a monopoly on AI (on this, read here). Why should we not worry?
Luciano Floridi: «At the moment, commercial data (on behavior, habits, movements) are in the hands of the tech giants, and institutional data (on tax, health, property) belong to states. Faced with this situation, there are those who are obsessively worried about any government interference with private subjects (Americans, for instance), and those who are concerned about the opposite (we Europeans). Both are legitimate, if often extreme, concerns. But, taken individually, they fail to make us grasp the real issue, which is, in both cases, the lack of control».
Why is control a bigger issue than ownership when it comes to Big Data?
«In an ideal situation, companies do their job and collect commercial data, then supply it to the State when it is needed for social purposes – for example to fight tax evasion, terrorism and the exploitation of migratory flows, or for scientific research. But the condition for all this to work is control. Control over the State is called democracy: when it is solid, functioning, and guided by forward-looking policy. And already on this point, in many cases, we are not really there, even in Europe.
«The problem with Big Data is not so much who owns them but the control one can have over how they are used»
Over private players, control should be provided by antitrust authorities. But clearly something went wrong when the Internet arrived, because Amazon, Facebook and Google have no competitors. They are like the only bar in a provincial town: either you go and drink there, or you stay home alone».
In a recent interview (click here), Evgeny Morozov made me understand that Europe has little hope of regaining this control. Would you agree?
«No. In fact, Europe has just created the AI4People platform. Its purpose is not to define rules but to formulate recommendations, which will be presented at the Summit on Artificial Intelligence of the European Parliament this fall. But AI producers are also moving towards self-regulation, with the AI for Social Good Partnership, which includes big players but also many small international ones. These are important signals, showing that those who hold the data – states and companies – are focusing on the problem».
Why should companies jeopardize a system that benefits them?
«Some recent political and social events have opened the eyes of big tech: the election of Trump, Brexit, the spread of fake news, social anger… The most mature companies have long understood that short-term advantages do not pay off in the long run, above all if they jeopardize the health of society, which is the first condition for economic prosperity. It is no coincidence that the giants are trading the culture of "I want it all, and I want it now" for a more forward-looking perspective. I think of Zuckerberg, who used to say "let's give people what they want" but now talks about helping society grow. Because those who know history are shivering: when epochal shifts occur (and the arrival of digital technology is one), society readjusts itself, but normally only after wars, revolutions and a lot of blood. The aim, now, should be to get there before those events happen».
And what about exercising control over the other data holder, the State?
«Here the matter is more complex, because the world is in desperate need of good politics. Good politics should first clarify that the digital is not a technological, business or communication issue. Digital development should not be one point on a political agenda but a political agenda in its own right. We now live in an infosphere, in which one is perpetually suspended between online and offline, and connected to one another.
«Those who know history are shivering: when epochal shifts occur (and the arrival of digital technology is one), society readjusts itself, but normally only after wars, revolutions and a lot of blood. The aim, now, should be to get there before those events happen»
If we looked at the Internet in these terms, we would realize that being disoriented is human, and that those who do not question or worry have simply not understood what is happening. We are facing an overwhelming revolution, as life-changing as the agricultural one (which took millennia to settle) and the industrial one (which took centuries). We are now in the first decades of this change».
What defines a “good” policy?
«The commitment to work on a digital human project. That means: a form of human life – individual, collective, and public – that a society presents and promotes as desirable, at least in theory or implicitly, and depending on the historical moment. It is plausible that any human project is not entirely feasible, or is only minimally so, and should therefore be understood only as a regulatory ideal, as an aim».
Isn’t it surprising? There is more need for philosophy and ethics today, in the hyper-tech world, than ever before…
«It’s very true. As I said, a quarter of a century ago, when we were writing about ethics in relation to technology, there was hardly a queue of people waiting to listen to us. Now, however, sooner or later everyone knocks on the "ethicist’s" door. The growing attention to ethics – which should guarantee the human dimension of the social project – is the reason why, in the end, we should be optimistic».