National Consumers League

Developing a pro-consumer approach towards offering defaults and controls to consumers

By NCL Google Public Policy Fellow Pollyanna Turner-Ward

This blog post is the fourth in a series offering a consumer perspective on developing an approach towards consumer privacy and data security.

This commentary is the product of a deep dive into the National Telecommunications and Information Administration’s (NTIA) September Request for Comments (RFC), a key part of the process that informs the government’s approach towards consumer privacy. Stakeholder responses to the RFC provide a glimpse into where consensus and disagreement lie on key issues among major consumer and industry players.

To operationalize the widely recognized Fair Information Practice Principles (FIPPs), the NTIA advances a risk-based approach towards privacy and data security. This approach aims to afford organizations flexibility in achieving outcomes such as user transparency, control, and access when building privacy protections into products and services. It proposes that users have “qualified access [sic] to data that they have provided, and to rectify, complete, amend, or delete this data.” However, Asian-Americans Advancing Justice-AAJC’s comments expressed concern that the vague phrase “qualified access” may be limiting for individuals. To determine which defaults and controls to offer consumers when making data practice decisions, the NTIA proposes that organizations consider factors such as the risk of privacy harm (analyzed in an earlier blog of this series), information sensitivity, user expectations, and the context of the transaction or data flow. The next few blogs in this series will investigate this approach.

Information sensitivity

By proposing heightened protections for “sensitive” information, the NTIA presumes that the disclosure of such information carries the greatest risk of harm to consumers. This aligns with the position of the Federal Trade Commission (FTC), which views certain categories of data as highly sensitive. These categories, slightly narrower than those under the General Data Protection Regulation (GDPR), include “at a minimum” personal information relating to health, finances, children, geolocation, and Social Security numbers. Detailed data regarding individual television viewing habits may also be considered sensitive, as may web-browsing and application usage history.

However, as technology evolves and harms arise from new and unexpected data categories, this list may quickly become outdated. As an alternative, Google’s Framework for Responsible Data Protection Regulation ties information sensitivity to individual and group harm. However, privacy harms are often difficult to quantify, and Google’s approach seems to ignore the fact that any data point may become sensitive once aggregated with others. For instance, Facebook “likes” may be aggregated to build personality profiles for the purpose of persuading voters. If this isn’t a “sensitive” use of personal information, frankly I don’t know what is. Inferences drawn from “non-sensitive” information may also form the basis for decisions that have significant effects on people’s lives. For example, if an individual were to purchase a wig and cancer medication, their shopping history (non-sensitive) may be used to infer health-related information (sensitive).

However, the idea that all data are the same and should be treated the same is troubling, because it means we take no special care with especially sensitive data. NCL agrees with the Center for Democracy and Technology (CDT) that sensitive personal data should be subject to purposive opt-in requirements combined with a strong legal presumption against further sharing or use for purposes unrelated to the service provided. Therefore, rather than doing away with the distinction between sensitive and non-sensitive information, the problems illustrated above could be tackled by extending protections to inferred information, rather than merely to information provided directly by individuals to organizations.

As touched upon in the first blog of this series, sensitive inferences can be consequential for individuals, groups, and society. A major concern about the current digital ecosystem is increasingly intrusive and pervasive commercial surveillance, which, according to CDT, has the effect of controlling and manipulating the thoughts, behavior, and attitudes of consumers and citizens. This threatens ideals of self-determination, fairness, justice, and equal opportunity. To promote these fundamental values in online commerce, the Center for Digital Democracy suggests that individuals need access to the information necessary to prove claims of discrimination. Companies must therefore develop systems that enable consumers to access and correct the profiles held about them.
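
To make that idea more concrete, the sketch below (in Python, using hypothetical names and fields such as ProfileAccessService and inferred_attributes) illustrates one possible shape for such a system: a consumer retrieves the profile a company holds about them and submits a correction. It is an illustrative assumption, not a description of the NTIA proposal or of any company’s actual system.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class ConsumerProfile:
        """Hypothetical profile of attributes a company has inferred about a user."""
        user_id: str
        inferred_attributes: Dict[str, str] = field(default_factory=dict)

    class ProfileAccessService:
        """Minimal sketch of consumer access to, and correction of, stored profiles."""

        def __init__(self) -> None:
            self._profiles: Dict[str, ConsumerProfile] = {}

        def get_profile(self, user_id: str) -> ConsumerProfile:
            # Access: show the consumer what is held about them (empty if nothing).
            return self._profiles.setdefault(user_id, ConsumerProfile(user_id))

        def correct_attribute(self, user_id: str, attribute: str, value: str) -> None:
            # Correction: the consumer overrides an inferred attribute.
            self.get_profile(user_id).inferred_attributes[attribute] = value

    # Example: a consumer reviews and corrects an inference held about them.
    service = ProfileAccessService()
    service.get_profile("u-123").inferred_attributes["health_interest"] = "cancer treatment"
    service.correct_attribute("u-123", "health_interest", "no such interest")
    print(service.get_profile("u-123").inferred_attributes)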

Aside from aggregation issues, information sensitivity is also a highly personal, subjective, and arbitrary concept. Public Knowledge argues that permitting companies to selectively grant user control is inconsistent with the principles of user control and ownership of personal information. They suggest that protections should be provided for all personal information and that consent should be as granular as possible. This would enable consumers to elect privacy settings that reflect the fact that each person defines what is and is not sensitive to them. Granular consent options would also enable individuals to consent to the use of their data for research purposes but not for targeted advertising, or vice versa. The GDPR already requires some level of granularity in notice and consent, so many companies have already had to figure out how to offer users nuanced consent choices. This type of thinking could be extended to the U.S.
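
As a rough illustration of what per-purpose consent could look like in practice, the minimal sketch below (Python, with hypothetical purpose names) records a separate opt-in choice for each purpose and defaults everything to opt-out, so a user could permit research use while refusing targeted advertising. This is an assumed design for illustration only, not a statement of what the GDPR or any proposed framework requires.

    from dataclasses import dataclass, field
    from typing import Dict

    # Hypothetical purposes a service might seek consent for.
    PURPOSES = ("service_delivery", "research", "targeted_advertising", "third_party_sharing")

    @dataclass
    class ConsentRecord:
        """Per-user, per-purpose consent choices; every purpose defaults to opt-out."""
        user_id: str
        choices: Dict[str, bool] = field(default_factory=lambda: {p: False for p in PURPOSES})

        def set_choice(self, purpose: str, allowed: bool) -> None:
            if purpose not in self.choices:
                raise ValueError(f"Unknown purpose: {purpose}")
            self.choices[purpose] = allowed

        def allows(self, purpose: str) -> bool:
            # Absent an affirmative choice, the answer is no.
            return self.choices.get(purpose, False)

    # Example: a user permits research use of their data but not targeted advertising.
    record = ConsentRecord(user_id="u-123")
    record.set_choice("research", True)
    assert record.allows("research")
    assert not record.allows("targeted_advertising")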

Along with information sensitivity and privacy risks and harms, the NTIA’s risk-based approach towards consumer privacy and data security also encourages organizations to consider user expectations and the context of the data flow when building privacy protections into products, services, and business models. The next blog will explore these other factors for organizations to consider when determining which defaults and controls to offer consumers.

The author completed her undergraduate degree in law at Queen Mary University of London and her Master of Laws at William & Mary. She has focused her career on privacy and data security.