Augmented reality (AR) has gained much public attention since the success of Pokémon Go in 2016. Technology companies such as Apple and Google currently focus primarily on mobile AR (MAR) technologies, i.e., applications on mobile devices such as smartphones or tablets. Associated privacy issues have to be investigated early to foster market adoption, especially since past research has found several threats associated with the use of smartphone applications. We therefore investigate two of the main privacy risks for MAR application users based on a sample of 19 of the most downloaded MAR applications for Android. First, we assess threats arising from bad privacy policies using a machine-learning approach. Second, we investigate which smartphone data resources the MAR applications access. Third, we combine both approaches to evaluate whether the privacy policies cover these data accesses. We provide theoretical and practical implications and recommendations based on our results.
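The third step above can be illustrated with a minimal sketch: comparing the set of data resources an app actually accesses against the data categories its privacy policy declares. The permission names and policy categories below are hypothetical examples, not data from the study.

```python
# Illustrative sketch (not the study's implementation): cross-check which
# data accesses of a MAR app are covered by its privacy policy.

# Hypothetical example: Android permissions requested by an app, and the
# data categories its privacy policy mentions.
app_permissions = {"CAMERA", "ACCESS_FINE_LOCATION", "READ_CONTACTS"}
policy_covered = {"CAMERA", "ACCESS_FINE_LOCATION"}

# Set difference yields the accesses the policy fails to mention.
uncovered = app_permissions - policy_covered
print(sorted(uncovered))  # → ['READ_CONTACTS']
```

The set difference directly encodes the coverage question: an empty result would mean every data access is addressed by the policy.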
When requesting a web-based service, users often fail to configure the website's privacy settings according to their own privacy preferences. Being overwhelmed by the choice of options, lacking knowledge of the related technologies, or being unaware of their own privacy preferences are just some of the reasons why users struggle. Privacy-setting prediction tools are particularly well suited to address these problems: they aim to lower the burden of configuring privacy settings in line with the owner's preferences. To meet the increased demand for explainability and interpretability arising from regulatory obligations – such as the General Data Protection Regulation (GDPR) in Europe – this paper introduces an explainable model for default privacy-setting prediction. Compared to previous work, we present improved feature selection, increased interpretability of each step in the model design, and enhanced evaluation metrics to better identify weaknesses in the model's design before it goes into production. As a result, we aim to provide an explainable and transparent tool for default privacy-setting prediction that users easily understand and are therefore more likely to use.
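The core idea of an explainable default-setting predictor can be sketched as a transparent rule list: every prediction follows from human-readable conditions that can be shown to the user. The feature names and rules below are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch of a transparent, rule-based default-setting predictor.
# Feature names and thresholds are hypothetical, chosen for illustration.

def predict_default_sharing(user):
    """Return a default sharing level ('private', 'friends', or 'public')
    via human-readable rules, so every decision can be explained."""
    if user["privacy_concern"] == "high":
        return "private"   # concerned users get the most restrictive default
    if user["shares_photos_publicly"]:
        return "public"    # observed behavior suggests an open default
    return "friends"       # moderate default otherwise

print(predict_default_sharing(
    {"privacy_concern": "high", "shares_photos_publicly": True}))  # → private
```

A rule list like this trades some predictive power for interpretability; each branch doubles as the explanation presented to the user, which is the property the GDPR-driven demand for explainability targets.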
This paper provides an assessment framework for privacy policies of Internet of Things (IoT) services based on particular GDPR requirements. The objective of the framework is to serve as a supportive tool for users to make informed privacy-related decisions. For example, when buying a new fitness tracker, users could compare different models with respect to privacy friendliness, or to more specific aspects of the framework such as whether data is passed to a third party. The framework consists of 16 parameters with one to four yes-or-no questions each and allows users to apply their own weights to the different parameters. We assessed 110 devices covered by 94 different policies. Furthermore, we conducted a legal assessment of the parameters to handle cases where a policy makes no statement at all regarding a certain parameter. The results of this comparative study show that most of the examined privacy policies of IoT devices/services are insufficient to address particular GDPR requirements and beyond. We also found a correlation between the length of a policy and its privacy transparency score.
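The weighted assessment described above can be sketched as follows: each parameter is scored by the fraction of its yes-or-no questions answered favorably, and a user-supplied weight controls how much that parameter contributes to the overall score. The parameter names and numbers are hypothetical illustrations, not values from the study.

```python
# Sketch of the framework's weighted-scoring idea (illustrative only).

def privacy_score(answers, weights):
    """answers: parameter -> fraction of favorable yes/no answers (0..1);
    weights: parameter -> user-chosen weight.
    Returns a weighted privacy score between 0 and 1."""
    total_weight = sum(weights.values())
    return sum(answers[p] * weights[p] for p in answers) / total_weight

# Hypothetical assessment of one policy with three example parameters:
answers = {"third_party_sharing": 0.0, "data_deletion": 1.0, "encryption": 0.5}
weights = {"third_party_sharing": 2.0, "data_deletion": 1.0, "encryption": 1.0}
print(privacy_score(answers, weights))  # → 0.375
```

Because the weights are supplied by the user, two people can rank the same set of devices differently, which is exactly the personalization the framework's user-defined weighting is meant to enable.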