Democracy in action: Visit to the German Bundestag’s Digital Agenda Committee


“Voting is the ultimate civic duty,” my grandfather liked to say, and every election Sunday he dutifully trotted to the polling station to place his cross. When I asked him exactly where he placed it, he always laughingly replied, “In the right place!”

Now that I am also of voting age, I too, of course, place my cross (in the right place). I firmly believe that only those who participate in the democratic process – those who use all the opportunities at their disposal – earn the right to complain afterwards. Recently, I used one such opportunity for the first time: I visited a public session of a Bundestag committee – the committee dedicated to the Digital Agenda.

A large proportion of parliamentary work is conducted within committees. These are formed at the behest of the German Bundestag and are maintained for the duration of an entire legislative period. There are currently 23 such permanent committees, comprising parliamentarians from the various political parties – represented proportionally according to the results of the last election. Within each committee, the members focus on a particular policy area. They are also advised by experts to ensure that they have the appropriate facts relating to the subject matter. Sometimes, these sessions are open to the public. Citizens are not allowed to contribute; rather, they are like flies on the wall. If you have the urge to say something at the conclusion of a session, you have, for example, the possibility of writing to your representative.

Even before the session, I had enjoyed lively email contact with members of this particular committee – which deals with the extremely diverse aspects of digitalisation. I researched the committee’s website to determine which parliamentarians each party had assigned to it, before writing to them. The research assistants of the respective delegates were very responsive, and a lively exchange on the subject of AI followed.

It was a Wednesday in March when I finally made my way to the Paul-Löbe-Haus, opposite the Federal Chancellery. I had registered in advance, and at reception I traded my passport for a visitor’s badge. A lady came to collect me and guided me – together with around 20 others – to the session room. The room had two levels. On the upper floor, visitors were seated on rather uncomfortable chairs (don’t let anybody tell you that democracy is comfortable!). Below us, around a desk that would have made King Arthur green with envy, sat a dozen or more committee members. At its centre were not the swords of Parzival and co., but four experts waiting for the session to begin:

Prof. Dr. Frank Kirchner of the German Research Center for Artificial Intelligence (DFKI GmbH)
Fabian J. G. Westerheide of the Federal Association of German Startups e.V.
Enno Park of the association Cyborgs e.V.
Matthias Spielkamp of the association AlgorithmWatch

The session followed a strict choreography. To open, each expert was given five minutes to read out his or her statement. This was followed by questions from the committee; each questioner – like each expert answering – was allocated three minutes. A giant clock in the middle of the room showed how much time remained (and hid the CDU delegation from my view; I could, however, see the Green Party delegate browsing through pictures or album covers on his mobile phone – but I only noticed that in passing).

Each question and answer was therefore succinct and to the point – an idea I should replicate the next time my family gets together at home… But I digress. Back to the issue at hand.


What exactly is Artificial Intelligence?

Fabian Westerheide stated, “Artificial Intelligence is an umbrella term. It is used to describe the concept of making machines more similar to humans: machines that can see, hear, understand, and think as we do.”

He differentiated between Narrow Artificial Intelligence and Artificial General Intelligence. According to Westerheide, the former are “systems with very specialised functions (such as driving cars, trading in equities or answering emails); however, they cannot transfer this knowledge. Narrow AI primarily remains a specialised and trained application.” Conversely, he described the latter as “based on human-like intelligence, capable of interaction with humans,” before adding the caveat, “To date, there are no AGIs.”

Matthias Spielkamp of AlgorithmWatch stated, “Both the concepts of ‘learning’ and ‘intelligence’ are humanisations, that is to say anthropomorphisms. The processes here [in AI] cannot be equated to human intelligence or language acquisition. Nevertheless, it is unlikely that the terms AI or machine learning will disappear from common parlance. It would be more apt to apply terms such as machine intelligence or mechanical learning. These suggest a type of intelligence and learning that differs from that of humans, rather than implying that machines imitate intelligence and learning in a way that is comparable to humans.”

He continued, “More terms will join these in future. However, it makes no sense for lawmakers to focus on terminology; rather, they should favour differentiations/categorisations that serve their purposes: in other words, a politically relevant differentiation. (…) From a policy perspective, it is relevant to identify the potential risks and benefits of automation (or in robotics: physical interaction), and how these can be avoided and supported, respectively.”

By this point, it had become abundantly clear why our system of government relies on committees and uses them to advise representatives on certain matters. They are not about helping representatives expand their general knowledge so that they might perform better on Who Wants to Be a Millionaire? Their purpose is to create a basis upon which sensible laws can be drafted. Especially in the case of future technologies, the uncertainties need to be considered as far in advance as possible in order to assess consequences and risks as well as the overall objective. One of the questions raised was:


How can transparency and democratic control be guaranteed with regard to the ethical norms that underlie the algorithms of specific AI systems?

Frank Kirchner replied, “Ethical norms are societal decisions, which need to be applied and preserved through lived ethics. Therefore, the ethical grounding of a society will generally be decisive in approaching technology. This is determined through education – both at home and in schools.”

This alone is not sufficient. Ultimately, it is the US and China that currently lead the field of AI development. At the moment, Chinese mobile phones and American accounting software – and the accompanying values of those countries – are our companions through daily life. This cannot be a sustainable solution. We need the active involvement of a strong European industry in this area.

Fabian Westerheide elaborated, “AI systems are trained, which is why cultural values are very important. At the moment, most AI applications stem from the US or China. Current developments allow us to see whether an AI system thinks Chinese or American. In Europe, and especially in Germany, there is an absence of political attention to this subject. It would make sense to programme machines with a do-not-harm-humans protocol today, rather than in ten years’ time; however, no such effort is currently being made. Transparency and democratic control are difficult, as systems are primarily controlled by private enterprises or the military. Moreover, these machines do not have open source code. That means that we do not even know what is actually going on inside the machine. The thought processes of constantly learning AI systems are a black box – comparable to the thoughts of a human, in that they exist only in the mind and cannot be observed by others. One approach to this field would be a European initiative that supervises and positively influences the development, research, and operation of artificial intelligence.”

I had never realised that Deutsche Telekom and the eLIZA team were in some way contributing to the preservation of European democratic values. I was a little bit proud.

On the subject of regulation, Enno Park said, “I consider regulation to have great potential only at an EU level. Because the field of AI is extremely broad – it includes concepts as diverse as decision-making systems, social media, and prostheses and implants – the question needs to be broken down into several sub-areas, each of which should be discussed individually. I believe it is necessary to commission an institute in the long term, giving it the task of categorisation and of studying the effects on different fields of law in detail, before producing concepts for changes to the law based on its findings. Such an institute should also be entrusted with the additional task of observing technological development and – once ‘strong AI’ becomes likely – informing committees of any resulting need for action.”

After two hours my head was throbbing. It was reminiscent of how it felt attending difficult lectures during my days at university. The discussions were consistently comprehensive and reasoned, and I came away feeling well informed. Fortunately, I did not need to draft any laws afterwards – that task stayed in the hands of my elected representatives.

 

All statements can be read in their entirety here.

More information on the committees
More information and dates of the committee Digital Agenda
Contact for questions and for registering to attend public committee sessions in the Bundestag
