University Publications
This article documents and classifies instances of transnational intellectual property (IP) enforcement and licensing on the Internet with a particular focus on the territorial reach of the respective regimes. Regarding IP enforcement, I show that the bulk of transnational or even global measures is adopted in the context of “voluntary” self-regulation by various intermediaries, namely domain name registrars, access and host providers, search engines, and advertising and payment services. Global IP licensing is, in contrast, less prevalent than one might expect. It is practically limited to freely accessible Open Content, whereas markets for fee-based services remain territorially fragmented. Overall, three layers of IP governance on the Internet can be distinguished. Based on global licenses, Open Content is freely accessible everywhere. Plain IP infringements are equally combatted on a worldwide scale. Territorial fragmentation persists, instead, in the market segment of fee-based services and in hard cases of conflicts of IP laws/rights. All three universal norms (global accessibility, global illegality, global fragmentation) are supported by a quite solid, “rough” global consensus.
Europe is a key normative power. Its legitimacy as a force for ensuring the rule of law in international relations is unparalleled. It also packs an economic punch. In data protection and the fight against cybercrime, European norms have been successfully globalized. The time is right to take the next step: Europe must now become the international normative leader in developing a new deal on internet governance. To ensure this, European powers should commit to rules that work for security, economic development and human rights on the internet, and implement them in a reinvigorated IGF.
The revolution will be tweeted: how the internet can stimulate the public exercise of freedoms
(2012)
This article discusses how new technologies of communication, especially the Internet and, more specifically, social network services, can interfere in social interactions and in political relations. The main objective is to problematize the concept of public liberty and to examine how the new technologies can promote the reoccupation of public spaces and the recovery of public life, in opposition to the tendency to valorize the private sphere observed in the second half of the twentieth century. The theoretical benchmark adopted for the investigation is Hannah Arendt's theory on the exercise of fundamental political capacities in order to establish a public space of freedom, as presented in “On Revolution”. The “Praia da Estação” (“Station Beach”) case is chosen to test the hypothesis. In 2010, in the Brazilian city of Belo Horizonte, different individuals articulated a movement through blogs, Twitter and Facebook in order to protest against the Mayor’s act that banned the holding of cultural events in one of the main public places of the city, the “Praça da Estação” (Station Square). By applying Arendt's concepts to the selected case, it is possible to demonstrate that the Internet can assume an important role against governmental arbitrariness and abuse of power, as it can stimulate the public exercise of fundamental freedoms, such as freedom of assembly and demonstration.
Law is a force of order. It reacts, usually with a necessary time delay, to technological progress. Only twelve years after Samuel Morse presented the first workable telegraph system in New York in 1838, and six years after the first completed telegraph line from Washington to Baltimore, central European states agreed on an international framework for telegraphs. It has been much more than twelve years since the technologies underlying the internet’s popularity today, such as the ‘World Wide Web’, were invented. No international framework has emerged, even though normative approaches abound. There are norms that are applied to the internet, but the recognition of the existence of an underlying, structuring order is missing. This motivates the present study.
In nature, society and technology, many disordered systems exist that show emergent behaviour, where the interactions of numerous microscopic agents result in macroscopic, systemic properties that may not be present on the microscopic scale. Examples include phase transitions in magnetism and percolation (for example in porous disordered media), as well as biological and social systems. Technological systems that are explicitly designed to function without central control instances, such as their prime example the Internet, or virtual networks like the World Wide Web, which is defined by the hyperlinks from one web page to another, also exhibit emergent properties. The study of the common network characteristics found in previously seemingly unrelated fields of science, and the urge to explain their emergence, form a scientific field in its own right: the science of complex networks. In this field, methodologies from physics, leading to simplification and generalization by abstraction, help to shift the focus from the implementation details on the microscopic level to the macroscopic, coarse-grained system level. By describing the macroscopic properties that emerge from microscopic interactions, statistical physics, in particular stochastic and computational methods, has proven to be a valuable tool in the investigation of such systems. The mathematical framework for the description of networks is graph theory, in hindsight founded by Euler in 1736 and an active area of research since then. In recent years, applied graph theory has flourished through the advent of large-scale data sets made accessible by the use of computers. A paradigm for microscopic interactions among entities that locally optimize their behaviour to increase their own benefit is game theory, the mathematical framework of decision-making. With first applications in economics, e.g. von Neumann (1944), game theory is an established field of mathematics.
However, game-theoretic behaviour is also found in natural systems, e.g. populations of the bacterium Escherichia coli, as described by Kerr (2002). In the present work, a combination of graph theory and game theory is used to model the interactions of selfish agents that form networks. Following brief introductions to graph theory and game theory, the present work approaches the interplay of local self-organizing rules with network properties and topology from three perspectives. To investigate the dynamics of topology reshaping, a coupling of the so-called iterated prisoner's dilemma (IPD) to the network structure is proposed and studied in Chapter 4. Depending on a free parameter in the payoff matrix, the reorganization dynamics result in various emergent network structures. The resulting topologies exhibit an increase in performance, measured by a variance of closeness, by a factor of 1.2 to 1.9, depending on the chosen free parameter. Presented in Chapter 5, the second approach puts the focus on a static network structure and studies the cooperativity of the system, measured by the fixation probability. Heterogeneous strategies to distribute incentives for cooperation among the players are proposed. These strategies make it possible to enhance cooperative behaviour while requiring fewer total investments. Putting the emphasis on communication networks in Chapters 6 and 7, the third approach investigates the use of routing metrics to increase the performance of data packet transport networks. Algorithms for the iterative determination of such metrics are demonstrated and investigated. The most successful of these algorithms, the hybrid metric, is able to increase the throughput capacity of a network by a factor of 7. During the investigation of the iterative weight assignments, a simple static weight assignment, the so-called logKiKj metric, is found.
In contrast to the algorithmic metrics, it incurs vanishing computational costs, yet it is still able to increase the performance by a factor of 5.
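The "logKiKj" metric mentioned above can be illustrated with a small sketch. As an assumption on my part (the thesis itself defines the exact form), each edge (i, j) is weighted by log(k_i · k_j), where k_i is the degree of node i; shortest-path routing under such a weight then tends to avoid hub-to-hub edges, spreading packet load away from congested high-degree nodes:

```python
import heapq
import math

def log_kikj_weights(adj):
    """Assign the weight log(k_i * k_j) to every edge of an adjacency dict."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    # Store each undirected edge once, keyed by the sorted node pair.
    return {(u, v): math.log(deg[u] * deg[v])
            for u in adj for v in adj[u] if u < v}

def shortest_path_cost(adj, weights, src, dst):
    """Dijkstra over the weighted graph; returns the total path cost."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            w = weights[(u, v)] if (u, v) in weights else weights[(v, u)]
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")

# Toy graph: node 0 is a hub connected to all others, plus a peripheral ring.
adj = {0: [1, 2, 3, 4],
       1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3]}
weights = log_kikj_weights(adj)
```

On this toy graph, the route from node 1 to node 2 via the hub costs log(2·4) + log(4·3), while the direct peripheral edge costs only log(2·3), so the metric steers traffic around the hub, which is the qualitative effect the abstract describes.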
This paper aims to assess the arguments that claim representative democracy may be enhanced or replaced by an updated electronic version. Focusing on the dimension of elections and electioneering as the core mechanism of representative democracy, I will discuss: (1) the proximity argument, used to claim the necessity of filling the gap between decision-makers and stakeholders; (2) the transparency argument, which claims to remove obstacles to the publicity of power; (3) the bottom-up argument, which calls for a new form of legitimacy that goes beyond the classical mediation of parties or unions; (4) the public sphere argument, which refers to the problem of the hierarchical relation between voters and their representatives; (5) the disintermediation argument, used to describe the (supposed) new form of democracy following the massive use of ICTs. The first way of conceptualizing e-democracy as different from mainstream 20th-century representative democracy regimes is to imagine it as a new form of direct democracy: this conception often underlies contemporary studies of e-voting. To avoid some of the naivety of this conception of e-democracy, we should take a step back and consider a broader range of issues than mere manoeuvring around the electoral moment. I shall therefore problematize the above-mentioned approach by analyzing a wider range of problems connected to elections and electioneering in their relation with ICTs.
Background: A web-based malaria reporting information system (MRIS) has the potential to improve malaria reporting and management. The aim of this study was to evaluate the existing manual paper-based MRIS and to provide a way to overcome the obstacles by developing a web-based MRIS in Indonesia.
Methods: An exploratory study was conducted in 2012 in Lahat District, South Sumatra Province, Indonesia. We evaluated the current reporting system and identified the potential benefits of using a web-based MRIS through in-depth interviews with selected key informants. A feasibility study was then conducted to develop a prototype system. A web-based MRIS was developed, integrated and synchronized across all levels, from the Primary Healthcare Centres (PHCs) up to the Lahat District Health Office.
Results: The paper-based reporting system was sub-optimal due to a lack of transportation, communication, and human capacity. We developed a web-based MRIS to replace the current one. Although the web-based system has the potential to improve the malaria reporting information system, there were some barriers to its implementation, including a lack of skilled operators, limited computer availability and a lack of internet access. Recommended ways to overcome these obstacles are training the operators, providing an offline mode for the application, and enabling malaria reporting via mobile phone text messaging.
Conclusion: The web-based MRIS has the potential to be implemented as an enhanced malaria reporting information system, and investment in the system to support timely management responses is essential for malaria elimination. The developed application can be cloned to other areas with similar characteristics, and its built-in web base will aid its application in the 5G future.
In 1957, Craig Mooney published a set of human face stimuli to study perceptual closure: the formation of a coherent percept on the basis of minimal visual information. Images of this type, now known as “Mooney faces”, are widely used in cognitive psychology and neuroscience because they offer a means of inducing variable perception with constant visuo-spatial characteristics (they are often not perceived as faces if viewed upside down). Mooney’s original set of 40 stimuli has been employed in several studies. However, it is often necessary to use a much larger stimulus set. We created a new set of over 500 Mooney faces and tested them on a cohort of human observers. We present the results of our tests here, and make the stimuli freely available via the internet. Our test results can be used to select subsets of the stimuli that are most suited for a given experimental purpose.
Until three years ago, ICTs represented a mere “subordinate clause” within the “grammar” of Participatory Budgeting (PB), the tool made famous by the experience of Porto Alegre and today spread to more than 1,400 cities across the planet. In fact, PB – born to enhance deliberation and exchange among citizens and local institutions – has long regarded ICTs as a sort of “pollution factor”: useful to foster transparency and to support the spread of information, but liable to lower the quality of public discussion, turning its “instantaneity” into “immediatism” and its “time-saving accessibility” into “reductionism” and laziness in facing the complexity of public decision-making through citizens’ participation. At the same time, the ICT community often regarded Participatory Budgeting as a tool too complex and too charged with ideology to cooperate with. In the last three years, however, the barriers which prevented ICTs and Participatory Budgeting from establishing a constructive dialogue began to shrink, thanks to several experiences which demonstrated that technologies can help overcome some “cognitive injustices” when they are not used merely as a means to simplify the organization of participatory processes and to bring “larger numbers” of participants into the process. In fact, ICTs can be valorized as a space that adds “diversity” to the processes and increases outreach capacity.
Paradoxically, the experiences helping to overcome the mutual skepticism between ICTs and PB did not come from the centre of the Global North, but were implemented in peripheral or semi-peripheral countries (the Democratic Republic of Congo, Brazil, the Dominican Republic and, in Europe, Portugal), sometimes in cities where the “digital divide” is still high (at least in terms of Internet connections) and a significant part of the population lives in informal settlements and/or areas with low indicators of “connection.” Somehow, these experiences were able to demystify the “scary monolithicism” of ICTs, showing that some instruments (like mobile phones, and especially SMS text messaging) can grant a higher degree of connectivity, diffusion and accountability, while other dimensions, which could risk jeopardizing social inclusion, can be minimized through creativity. The paper tries to depict a possible panorama of collaboration for the near future, starting from descriptions of some of the above-mentioned “turning-point” experiences, both in the Global North and in the Global South.
This publication aims to provide an overview of how the digitalisation of communication results in societal trends such as an “always-on” culture, “shitstorms” and “fake news”, and of their effects on schools, media, non-governmental organisations, work and sports.
Table of Contents
Christian Reuter, Tanjev Schultz, Christian Stegbauer: Digitalisation and Communication: Societal Trends and the Change in Organisations — Preface
Daniel Lambach: Digital World and Real World – Opposites No More
Leonard Reinecke: Brave New Smartphone World? Psychological Wellbeing between Digital Autonomy and Constant Connectedness
Christian Reuter: Fake News and the Manipulation of Public Opinion
Christian Stegbauer: Tantrums on a Massive Scale, or: Could Anybody be a Victim of Social Media Outrage?
Volker Schaeffer: “We Have Always Been Living in Bubbles”: The Opportunities and Risks in the Digitalisation of Media
Angela Menig, Verena Zimmermann, Joachim Vogt: Digital Transformation of the Workplace – Risk or Opportunity?
Stefan Aufenanger, Jasmin Bastian: Digital Technology in Schools
Angelika Böhling: Development Assistance Goes Digital – The Opportunities and Challenges Non-Governmental Organisations Face in Digital Communication
Josef Wiemeyer: Digital Interaction and Communication in Sports