Privacy concerns as well as trust and risk beliefs are important factors that can influence users' decision to use a service. One popular model that integrates these factors relates the Internet Users' Information Privacy Concerns (IUIPC) construct to trust and risk beliefs. However, this model has not yet been applied to a privacy-enhancing technology (PET) such as an anonymization service. Therefore, we conducted a survey among 416 users of the anonymization service JonDonym [1] and collected 141 complete questionnaires. We rely on the IUIPC construct and the related trust-risk model and show that it needs to be adapted for the case of PETs. In addition, we extend the original causal model by including trust beliefs in the anonymization service provider and show that they have a significant effect on the actual use behavior of the PET.
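For illustration, the following is a minimal sketch of how such an extended trust-risk structural equation model could be specified and estimated with the Python package semopy; the construct and indicator names (iuipc1, trust1, use, etc.) are hypothetical placeholders, not the paper's actual survey items.

```python
# Hypothetical sketch: an extended IUIPC trust-risk model in semopy.
# Latent constructs (=~) and regressions (~); names are placeholders.
import pandas as pd
import semopy

MODEL_DESC = """
iuipc =~ iuipc1 + iuipc2 + iuipc3
trust_prov =~ trust1 + trust2 + trust3
risk =~ risk1 + risk2 + risk3
trust_prov ~ iuipc
risk ~ iuipc + trust_prov
use ~ trust_prov + risk
"""

def fit_model(df: pd.DataFrame) -> pd.DataFrame:
    """Fit the SEM on survey data and return parameter estimates."""
    model = semopy.Model(MODEL_DESC)
    model.fit(df)
    return model.inspect()  # estimates, standard errors, p-values

# usage: fit_model(pd.read_csv("survey_responses.csv"))
```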
Enabling cybersecurity and protecting personal data are crucial challenges in the development and provision of digital service chains. Data and information are the key ingredients in the creation process of new digital services and products. While legal and technical problems are frequently discussed in academia, ethical issues of digital service chains and the commercialization of data are seldom investigated. Thus, based on outcomes of the Horizon 2020 PANELFIT project, this work discusses current ethical issues related to cybersecurity. Utilizing expert workshops and encounters as well as a scientific literature review, ethical issues are mapped onto individual steps of digital service chains. Not surprisingly, the results demonstrate that ethical challenges cannot be resolved in a general way, but need to be discussed individually and with respect to the ethical principles that are violated in the specific step of the service chain. Nevertheless, our results support practitioners by providing and discussing a list of ethical challenges to enable legally compliant as well as ethically acceptable solutions in the future.
To address security and privacy problems in practice, a solid elicitation of requirements is essential before attempting a solution. In this thesis, specific challenges in the areas of social engineering, security management, and privacy-enhancing technologies are analyzed:
Social Engineering: An overview of existing tools usable for social engineering is provided and defenses against social engineering are analyzed. Serious games are proposed as a more pleasant way to raise employees’ awareness and to train them.
Security Management: Specific requirements for small and medium-sized energy providers are analyzed, and a set of tools to support them in assessing security risks and improving their security is proposed. Larger enterprises are supported by a method to collect security key performance indicators for different subsidiaries and by a risk assessment method for apps on mobile devices. Furthermore, a method to select a secure cloud provider – currently the most popular form of outsourcing – is provided.
Privacy Enhancing Technologies: Relevant factors for users' adoption of privacy-enhancing technologies are identified, and economic incentives and hindrances for companies are discussed. Privacy by design is applied to integrate privacy into the e-commerce and Internet of Things use cases.
The aim of this study was to identify and evaluate different de-identification techniques that may be used in several mobility-related use cases. To do so, four use cases were defined in cooperation with a project partner focusing on the legal aspects of this project, as well as with the VDA/FAT working group. Each use case is designed to raise different legal and technical issues with regard to the data and information that are to be gathered, used, and transferred in the specific scenario. The use cases therefore differ in the type and frequency of the data gathered as well as in the required level of privacy and speed of computation. Upon identifying the use cases, a systematic literature review was performed to identify suitable de-identification techniques that provide data privacy. Additionally, external databases were considered, as data that is expected to be anonymous might be re-identified through the combination of existing data with such external data.
For each case, requirements and possible attack scenarios were created to illustrate where exactly privacy-related issues could occur and how exactly such issues could impact data subjects, data processors or data controllers. Suitable de-identification techniques should be able to withstand these attack scenarios. Based on a series of additional criteria, de-identification techniques are then analyzed for each use case. Possible solutions are then discussed individually in chapters 6.1 - 6.2. It is evident that no one-size-fits-all approach to protect privacy in the mobility domain exists. While all techniques that are analyzed in detail in this report, e.g., homomorphic encryption, differential privacy, secure multiparty computation and federated learning, are able to successfully protect user privacy in certain instances, their overall effectiveness differs depending on the specifics of each use case.
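To make one of the analyzed techniques concrete, below is a minimal sketch of the Laplace mechanism from differential privacy applied to a hypothetical count query over mobility records; the query, data, and epsilon values are illustrative assumptions, not parameters taken from the report.

```python
# Minimal sketch of the Laplace mechanism (differential privacy).
# The query, data, and epsilon below are illustrative only.
import numpy as np

def laplace_count(data, predicate, epsilon: float = 1.0) -> float:
    """Return a differentially private count of records matching predicate.

    A counting query has L1-sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    sensitivity/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for record in data if predicate(record))
    sensitivity = 1.0
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# usage: private count of trips longer than 10 km (hypothetical data)
trips_km = [2.5, 12.0, 8.1, 25.3, 4.4]
print(laplace_count(trips_km, lambda d: d > 10.0, epsilon=0.5))
```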
Background: Recognizing patients at risk for pulmonary complications (PC) is of high clinical relevance. Migration of polymorphonuclear leukocytes (PMN) to inflammatory sites plays an important role in PC and is tightly regulated by specific chemokines, including interleukin (IL)-8, and other mediators such as leukotriene (LT)B4. Previously, we reported that LTB4 identified, at an early stage, patients at risk for PC after trauma. Here, the relevance of LTB4 as an indicator of lung integrity was explored in a newly established long-term porcine severe trauma model (polytrauma, PT).
Methods: Twelve pigs (3 months old, 30 ± 5 kg) underwent PT including standardized femur fracture, lung contusion, liver laceration, hemorrhagic shock, subsequent resuscitation, and surgical fracture fixation. Six animals served as controls (sham). After 72 h, lung damage and inflammatory changes were assessed. LTB4 was determined in plasma before the experiment, immediately after trauma, and after 2, 4, 24, and 72 h. Bronchoalveolar lavage (BAL) fluid was collected before and after the experiment.
Results: Lung injury, local gene expression of IL-8, IL-1β, IL-10, and IL-18, and PMN infiltration into the lungs increased significantly in PT compared with sham. Systemic LTB4 increased markedly in both groups 4 h after trauma. While plasma LTB4 levels declined in sham animals, LTB4 increased further in PT after 72 h. A similar increase was observed in BAL fluid after PT.
Conclusions: In this severe trauma model, sustained lung injury and inflammatory changes were still evident at day 3 post-trauma. Specifically, increased LTB4 in this porcine long-term model indicated a rapid inflammatory alteration both locally and systemically. The results support the concept of LTB4 as a biomarker for PC after severe trauma and lung contusion.
We investigate privacy concerns and the privacy behavior of users of the AR smartphone game Pokémon Go. Pokémon Go accesses several functionalities of the smartphone and, in turn, collects a plethora of data about its users. For assessing the privacy concerns, we conducted an online study in Germany with 683 users of the game. The results indicate that the majority of the active players are concerned about the privacy practices of companies. This result hints at a cognitive dissonance, i.e., the privacy paradox. Since this result is common in the privacy literature, we complement the first study with a second one with 199 users, which assesses which measures users undertake to protect their privacy. The results are highly mixed and depend on the measure: relatively many participants use privacy-preserving measures when interacting with their smartphone. This implies that many users know about the risks and may take actions to protect their privacy, but deliberately trade off their information privacy for the utility generated by playing the game.
Privacy and its protection is an important part of the culture in the USA and Europe. Literature in this field lacks empirical data from Japan, making it difficult – especially for foreign researchers – to understand the situation there. To gain a deeper understanding, we examined the perception of a topic that is closely related to privacy: the perceived benefits of sharing data and the willingness to share with respect to the benefits for oneself, others, and companies. We found a significant impact of gender on each of the six analysed constructs.
When requesting a web-based service, users often fail to set the website's privacy settings according to their own privacy preferences. Being overwhelmed by the choice of settings, lacking knowledge of the related technologies, or being unaware of their own privacy preferences are just some of the reasons why users struggle. Privacy setting prediction tools are particularly well suited to address these problems, as they aim to lower the burden of setting privacy preferences in line with the user's actual preferences. In line with the increased demand for explainability and interpretability arising from regulatory obligations – such as the General Data Protection Regulation (GDPR) in Europe – this paper introduces an explainable model for default privacy setting prediction. Compared to previous work, we present improved feature selection, increased interpretability of each step in the model design, and enhanced evaluation metrics to better identify weaknesses in the model's design before it goes into production. As a result, we aim to provide an explainable and transparent tool for default privacy setting prediction that users easily understand and are therefore more likely to use.
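As a hypothetical illustration of such an interpretable prediction step (not the paper's actual model), a shallow decision tree can map user attributes to a default privacy setting while every prediction remains traceable to explicit rules; the feature and class names below are invented.

```python
# Hypothetical sketch of an interpretable default-setting predictor.
# Feature and class names are illustrative, not the paper's model.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["age_group", "shares_location", "uses_social_media"]

# toy training data: encoded user attributes -> preferred default setting
X = [[0, 1, 1], [1, 0, 0], [2, 1, 0], [1, 1, 1], [0, 0, 0]]
y = ["permissive", "strict", "balanced", "permissive", "strict"]

# a shallow tree stays human-readable, supporting explainability
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# print the learned decision rules, then predict for a new user
print(export_text(clf, feature_names=FEATURES))
print(clf.predict([[2, 0, 1]]))
```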
This paper provides an assessment framework for privacy policies of Internet of Things services which is based on particular GDPR requirements. The objective of the framework is to serve as a supportive tool for users to make informed privacy-related decisions. For example, when buying a new fitness tracker, users could compare different models with respect to privacy friendliness or to more particular aspects of the framework, such as whether data is passed on to a third party. The framework consists of 16 parameters with one to four yes-or-no questions each and allows users to bring in their own weights for the different parameters. We assessed 110 devices covered by 94 different policies. Furthermore, we conducted a legal assessment of the parameters to handle cases in which a policy makes no statement at all regarding a certain parameter. The results of this comparative study show that most of the examined privacy policies of IoT devices/services are insufficient to address particular GDPR requirements and beyond. We also found a correlation between the length of a policy and its privacy transparency score.
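A minimal sketch of how such a weighted yes/no assessment could be aggregated into a single score follows; the parameter names and weights are invented for illustration and do not reproduce the framework's actual 16 parameters.

```python
# Hypothetical sketch of the weighted yes/no scoring idea.
# Parameter names and weights are illustrative, not the actual framework.

def privacy_score(answers: dict, weights: dict) -> float:
    """Aggregate yes/no answers per parameter into a weighted score in [0, 1].

    answers: parameter -> list of booleans (True = privacy-friendly answer)
    weights: parameter -> user-chosen importance weight
    """
    total_weight = sum(weights[p] for p in answers)
    score = sum(
        weights[p] * (sum(a) / len(a))  # fraction of privacy-friendly answers
        for p, a in answers.items()
    )
    return score / total_weight

# usage with two invented parameters of a fitness tracker policy
answers = {"third_party_sharing": [False, True], "data_deletion": [True]}
weights = {"third_party_sharing": 2.0, "data_deletion": 1.0}
print(round(privacy_score(answers, weights), 2))
```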
Background: The German Corona-Warn-App (CWA) is a contact tracing app to mitigate the spread of SARS-CoV-2. As of today, it has been downloaded approximately 45 million times.
Objective: This study aims to investigate the influence of (non)users’ social environments on the usage of the CWA during 2 periods with relatively lower death rates and higher death rates caused by SARS-CoV-2.
Methods: We conducted a longitudinal survey study in Germany with 833 participants in 2 waves to investigate how participants perceive their peer groups' opinion about making use of the German CWA to mitigate the risk of SARS-CoV-2. In addition, we asked whether this perceived opinion, in turn, influences the participants with respect to their own decision to use the CWA. We analyzed these questions with generalized estimating equations. Further, 2 related-samples tests were performed to test for differences between users of the CWA and nonusers and between the 2 points in time (wave 1 with the highest death rates observed during the pandemic in Germany versus wave 2 with significantly lower death rates).
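For illustration, a minimal sketch of how such a repeated-measures analysis could be run with generalized estimating equations in Python's statsmodels; the variable names and data file are assumptions, not the study's actual dataset.

```python
# Hypothetical GEE sketch for a binary outcome over two survey waves.
# Variable names and the CSV file are assumptions, not the study's data.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("cwa_survey_waves.csv")  # one row per participant per wave

# CWA use regressed on perceived peer-group opinion and wave, with an
# exchangeable working correlation within each participant
model = smf.gee(
    "uses_cwa ~ peer_opinion + wave",
    groups="participant_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(result.summary())
```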
Results: Participants perceived that peer groups have a positive opinion toward using the CWA, with more positive opinions from the media, family doctors, politicians, and virologists/Robert Koch Institute and a lower, only slightly negative opinion originating from social media. Users of the CWA perceived their peer groups' opinions about using the app as more positive than nonusers did. Furthermore, the perceived positive opinion of the media (P=.001) and politicians (P<.001) was significantly lower in wave 2 compared with that in wave 1. The perceived opinion of friends and family (P<.001) as well as their perceived influence (P=.02) among nonusers toward using the CWA was significantly higher in wave 2 compared with that in wave 1. The influence of virologists (in Germany primarily communicated via the Robert Koch Institute) had the highest positive effect on using the CWA (B=0.363, P<.001). The only negative effect we found was that of the influence of politicians (B=–0.098, P=.04).
Conclusions: Opinions of peer groups play an important role when it comes to the adoption of the CWA. Our results show that the influence of virologists/Robert Koch Institute and family/friends exerts the strongest effect on participants’ decisions to use the CWA while politicians had a slightly negative influence. Our results also indicate that it is crucial to accompany the introduction of such a contact tracing app with explanations and a media campaign to support its adoption that is backed up by political decision makers and subject matter experts.