
Don't Play in Google's Privacy Sandbox

https://www.novabbs.com/rocksolid/article-flat.php?id=54&group=rocksolid.shared.security#54
Path: i2pn2.org!rocksolid2!.POSTED.rocksolid3!not-for-mail
From: anonu...@retrobbs.rocksolidbbs.com.remove-m8g-this (AnonUser)
Newsgroups: rocksolid.shared.security
Subject: Don't Play in Google's Privacy Sandbox
Date: Mon, 2 Sep 2019 06:09:52 +0000
Organization: RetroBBS
Message-ID: <67d077750da1418691a50c74480921e0$1@www.rocksolidbbs.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: novabbs.com; posting-account="rslight.i2p"; posting-host="rocksolid3:192.241.178.238";
logging-data="2443"; mail-complaints-to="usenet@novabbs.com"
User-Agent: rslight (http://news.novabbs.com)
To: rocksolid.shared.security
X-Comment-To: rocksolid.shared.security
X-FTN-PID: Synchronet 3.17a-Linux Dec 29 2018 GCC 6.3.0
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on rocksolidbbs.com
X-Spam-Level: *
X-Rslight-Site: $2y$10$cqwnMQlI5zqxdnPBKcwsEevruwfb2IVecGQQ.6E.6L/Hscn8MbidC
X-Gateway: retrobbs.rocksolidbbs.com [Synchronet 3.17a-Linux NewsLink 1.110]

https://www.eff.org/deeplinks/2019/08/dont-play-googles-privacy-sandbox-1

Last week, Google announced a plan to “build a more private web.” The
announcement post was, frankly, a mess. The company that tracks user
behavior on over ⅔ of the web said that “Privacy is paramount to us,
in everything we do.”

Google not only doubled down on its commitment to targeted advertising,
but also made the laughable claim that blocking third-party cookies -- by
far the most common tracking technology on the Web, and Google’s
tracking method of choice -- will hurt user privacy. By taking away the
tools that make tracking easy, it contended, browser developers like Apple and
Mozilla will force trackers to resort to “opaque techniques” like
fingerprinting. Of course, lost in that argument is the fact that the
makers of Safari and Firefox have shown serious commitments to shutting
down fingerprinting, and both browsers have made real progress in that
direction. Furthermore, a key part of the Privacy Sandbox proposals is
Chrome’s own (belated) plan to stop fingerprinting.

But hidden behind the false equivalencies and privacy gaslighting are a
set of real technical proposals. Some are genuinely good ideas. Others
could be unmitigated privacy disasters. This post will look at the
specific proposals under Google’s new “Privacy Sandbox” umbrella and
talk about what they would mean for the future of the web.

The good: fewer CAPTCHAs, fighting fingerprints

Let’s start with the proposals that might actually help users.

First up is the “Trust API.” This proposal is based on Privacy Pass, a
privacy-preserving and frustration-reducing alternative to CAPTCHAs.
Instead of having to fill out CAPTCHAs all over the web, with the Trust
API, users will be able to fill out a CAPTCHA once and then use “trust
tokens” to prove that they are human in the future. The tokens are
anonymous and not linkable to one another, so they won’t help Google (or
anyone else) track users. Since Google is the single largest CAPTCHA
provider in the world, its adoption of the Trust API could be a big win
for users with disabilities, users of Tor, and anyone else who hates
clicking on grainy pictures of storefronts.
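
The unlinkability property described above can be sketched with a toy blind signature. This is a highly simplified illustration, not the real protocol: actual Privacy Pass uses a verifiable oblivious PRF over elliptic curves, and the RSA parameters below are insecurely tiny. All names are made up for the example.

```python
# Toy RSA blind-signature sketch of the Privacy Pass idea behind the
# proposed "Trust API": the user earns tokens after one CAPTCHA, and the
# issuer never sees the token it signs, so redemption can't be linked back
# to issuance. Parameters are deliberately tiny and insecure.
import random
from math import gcd

# Issuer's toy RSA keypair (demonstration only).
p, q = 61, 53
n = p * q      # 3233
e = 17
d = 2753       # e * d ≡ 1 (mod lcm(p-1, q-1))

def issue_token(message: int) -> int:
    """Client blinds a token, issuer signs it blind, client unblinds."""
    while True:
        r = random.randrange(2, n)
        if gcd(r, n) == 1:          # r must be invertible mod n
            break
    blinded = (message * pow(r, e, n)) % n   # issuer can't see `message`
    blind_sig = pow(blinded, d, n)           # issuer signs after the CAPTCHA
    return (blind_sig * pow(r, -1, n)) % n   # unblind: a signature on `message`

def verify_token(message: int, sig: int) -> bool:
    return pow(sig, e, n) == message % n

token_msg = 1234
sig = issue_token(token_msg)
print(verify_token(token_msg, sig))  # True: redeemable, yet unlinkable
```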

Google’s proposed “privacy budget” for fingerprinting is also
exciting. Browser fingerprinting is the practice of gathering enough
information about a specific browser instance to try to uniquely identify
a user. Usually, this is accomplished by combining easily accessible
information like the user agent string with data from powerful APIs like
the HTML canvas. Since fingerprinting extracts identifying data from
otherwise-useful APIs, it can be hard to stop without hamstringing
legitimate web apps. As a workaround, Google proposes limiting the amount
of data that websites can access through potentially sensitive APIs. Each
website will have a “budget,” and if it goes over budget, the browser
will cut off its access. Most websites won’t have any use for things
like the HTML canvas, so they should be unaffected. Sites that need access
to powerful APIs, like video chat services and online games, will be able
to ask the user for permission to go “over budget.” The devil will be
in the details, but the privacy budget is a promising framework for
combating browser fingerprinting.
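
The budget mechanism described above can be sketched as a per-origin entropy counter. The API costs and the 10-bit budget here are invented for illustration; Google's proposal does not specify these numbers.

```python
# Minimal sketch of the "privacy budget" idea: each potentially identifying
# browser surface is assigned an entropy cost in bits, and an origin that
# would exceed its budget gets cut off. Costs and budget are hypothetical.

API_COST_BITS = {
    "userAgent": 2.0,          # mildly identifying
    "screenResolution": 3.5,
    "canvasReadback": 8.0,     # highly identifying
}
BUDGET_BITS = 10.0

class PrivacyBudget:
    def __init__(self):
        self.spent = {}  # origin -> bits consumed so far

    def request(self, origin: str, api: str) -> bool:
        cost = API_COST_BITS[api]
        used = self.spent.get(origin, 0.0)
        if used + cost > BUDGET_BITS:
            return False  # browser denies access (or could return stub data)
        self.spent[origin] = used + cost
        return True

b = PrivacyBudget()
print(b.request("https://news.example", "userAgent"))         # True
print(b.request("https://news.example", "canvasReadback"))    # True (10.0 used)
print(b.request("https://news.example", "screenResolution"))  # False: over budget
```

A site like a video chat service that legitimately needs the expensive APIs would trigger a permission prompt at the point where this sketch simply returns False.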

Unfortunately, that’s where the good stuff ends. The rest of Google’s
proposals range from mediocre to downright dangerous.

The bad: Conversion measurement

Perhaps the most fleshed-out proposal in the Sandbox is the conversion
measurement API. This is trying to tackle a problem as old as online ads:
how can you know whether the people clicking on an ad ultimately buy the
product it advertised? Currently, third-party cookies do most of the heavy
lifting. A third-party advertiser serves an ad on behalf of a marketer and
sets a cookie. On its own site, the marketer includes a snippet of code
which causes the user’s browser to send the cookie set earlier back to
the advertiser. The advertiser knows when the user sees an ad, and it
knows when the same user later visits the marketer’s site and makes a
purchase. In this way, advertisers can attribute ad impressions to page
views and purchases that occur days or weeks later.

Without third-party cookies, that attribution gets a little more
complicated. Even if an advertiser can observe traffic around the web,
without a way to link ad impressions to page views, it won’t know how
effective its campaigns are. After Apple started cracking down on
advertisers’ use of cookies with Intelligent Tracking Prevention (ITP),
it also proposed a privacy-preserving ad attribution solution. Now, Google
is proposing something similar. Basically, advertisers will be able to
mark up their ads with metadata, including a destination URL, a reporting
URL, and a field for extra “impression data” -- likely a unique ID.
Whenever a user sees an ad, the browser will store its metadata in a
global ad table. Then, if the user visits the destination URL in the
future, the browser will fire off a request to the reporting URL to report
that the ad was “converted.”
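
The browser-side flow just described can be sketched as follows. Field names are illustrative, not the attribute names from Google's explainer, and a real browser would fire an HTTP request to the reporting URL rather than return a list.

```python
# Sketch of the proposed conversion-measurement flow: the browser stores ad
# metadata in a global ad table, then reports a conversion when the user
# later reaches the destination URL. All names here are hypothetical.

ad_table = []  # browser-global store of ad impressions

def record_impression(destination: str, reporting_url: str,
                      impression_data: str) -> None:
    """Called when the user sees an ad marked up with this metadata."""
    ad_table.append({
        "destination": destination,
        "reporting_url": reporting_url,
        "impression_data": impression_data,  # the contested ID field
    })

def on_navigate(url: str) -> list:
    """On navigation, report any stored impressions matching this URL."""
    reports = []
    for entry in ad_table:
        if url.startswith(entry["destination"]):
            # A real browser would send a request to entry["reporting_url"].
            reports.append((entry["reporting_url"], entry["impression_data"]))
    return reports

record_impression("https://shop.example/shoes",
                  "https://adtech.example/convert",
                  impression_data="42")
print(on_navigate("https://shop.example/shoes/checkout"))
```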

In theory, this might not be so bad. The API should allow an advertiser to
learn that someone saw its ad and then eventually landed on the page it
was advertising; this can give raw numbers about the campaign’s
effectiveness without individually-identifying information.

The problem is the impression data. Apple’s proposal allows marketers to
store just 6 bits of information in a “campaign ID,” that is, a number
between 1 and 64. This is enough to differentiate between ads for
different products, or between campaigns using different media.

On the other hand, Google’s ID field can contain 64 bits of information
-- a number between 1 and 18 quintillion. This will allow advertisers to
attach a unique ID to each and every ad impression they serve, and,
potentially, to connect ad conversions with individual users. If a user
interacts with multiple ads from the same advertiser around the web, these
IDs can help the advertiser build a profile of the user’s browsing
habits.
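
The gap between the two proposals is plain arithmetic:

```python
# Apple's 6-bit campaign ID versus Google's 64-bit impression-data field:
# 6 bits can only label a campaign, while 64 bits is enough to hand every
# single ad impression its own globally unique identifier.

print(2 ** 6)   # 64 distinguishable values: campaign-level granularity
print(2 ** 64)  # 18,446,744,073,709,551,616: one ID per impression per user
```
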

The ugly: FLoC

Even worse is Google’s proposal for Federated Learning of Cohorts (or
“FLoC”). Behind the scenes, FLoC is based on Google’s pretty neat
federated learning technology. Basically, federated learning allows users
to build their own, local machine learning models by sharing little bits
of information at a time. This allows users to reap the benefits of
machine learning without sharing all of their data at once. Federated
learning systems can be configured to use secure multi-party computation
and differential privacy in order to keep raw data verifiably private.

The problem with FLoC isn’t the process, it’s the product. FLoC would
use Chrome users’ browsing history to do clustering. At a high level, it
will study browsing patterns and generate groups of similar users, then
assign each user to a group (called a “flock”). At the end of the
process, each browser will receive a “flock name” which identifies it
as a certain kind of web user. In Google’s proposal, users would then
share their flock name, as an HTTP header, with everyone they interact
with on the web.
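
The mechanism can be sketched crudely as follows. Real FLoC would use federated clustering so that similar (not merely identical) histories land in the same flock; this toy just hashes the visited-domain set into one of 65,536 buckets, and the header name shown is hypothetical.

```python
# Deliberately crude sketch of FLoC: derive a short "flock name" from
# browsing history, which the browser would then attach to every request.
import hashlib

def flock_name(history: set, buckets: int = 2 ** 16) -> str:
    digest = hashlib.sha256("|".join(sorted(history)).encode()).hexdigest()
    return f"flock-{int(digest, 16) % buckets}"

alice = {"news.example", "knitting.example", "bank.example"}
bob = {"news.example", "knitting.example", "bank.example"}

# Identical habits yield the same flock, readable by every site visited,
# e.g. as a request header (name hypothetical):  Sec-Flock: flock-12345
print(flock_name(alice) == flock_name(bob))  # True
```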

This is, in a word, bad for privacy. A flock name would essentially be a
behavioral credit score: a tattoo on your digital forehead that gives a
succinct summary of who you are, what you like, where you go, what you
buy, and with whom you associate. The flock names will likely be
inscrutable to users, but could reveal incredibly sensitive information to
third parties. Trackers will be able to use that information however they
want, including to augment their own behind-the-scenes profiles of users.

Google says that the browser can choose to leave “sensitive” data from
browsing history out of the learning process. But, as the company itself
acknowledges, different data is sensitive to different people; a
one-size-fits-all approach to privacy will leave many users at risk.
Additionally, many sites currently choose to respect their users’
privacy by refraining from working with third-party trackers. FLoC would
rob these websites of such a choice.

Furthermore, flock names will be more meaningful to those who are already
capable of observing activity around the web. Companies with access to
large tracking networks will be able to draw their own conclusions about
the ways that users from a certain flock tend to behave. Discriminatory
advertisers will be able to identify and filter out flocks which represent
vulnerable populations. Predatory lenders will learn which flocks are most
prone to financial hardship.

FLoC is the opposite of privacy-preserving technology. Today, trackers
follow you around the web, skulking in the digital shadows in order to
guess at what kind of person you might be. In Google’s future, they will
sit back, relax, and let your browser do the work for them.

The “ugh”: PIGIN

That brings us to PIGIN. While FLoC promises to match each user with a
single, opaque group identifier, PIGIN would have each browser track a set
of “interest groups” that it believes its user belongs to. Then,
whenever the browser makes a request to an advertiser, it can send along a
list of the user’s “interests” to enable better targeting.

Google’s proposal devotes a lot of space to discussing the privacy risks
of PIGIN. However, the protections it discusses fall woefully short. The
authors propose using cryptography to ensure that there are at least 1,000
people in an interest group before disclosing a user’s membership in it,
as well as limiting the maximum number of interests disclosed at a time to
5. This limitation doesn’t hold up to much scrutiny: membership in 5
distinct groups, each of which contains just a few thousand people, will
be more than enough to uniquely identify a huge portion of users on the
web. Furthermore, malicious actors will be able to game the system in a
number of ways, including to learn about users’ membership in sensitive
categories. While the proposal gives a passing mention to using
differential privacy, it doesn’t begin to describe how, specifically,
that might alleviate the myriad privacy risks PIGIN raises.
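
A back-of-the-envelope calculation supports the uniqueness claim. Assuming roughly independent groups of 3,000 members each drawn from 4 billion web users (both figures invented for illustration):

```python
# Membership in five "interest groups" of a few thousand people each is
# plenty to single a user out of billions: the expected number of other
# people sharing all five memberships is astronomically below one.

web_users = 4_000_000_000
group_size = 3_000
p = group_size / web_users              # chance a random user is in one group
expected_overlap = web_users * p ** 5   # users expected in all five groups
print(expected_overlap < 1e-20)         # True: the combination is unique
```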

Google touts PIGIN as a win for transparency and user control. This may be
true to a limited extent. It would be nice to know what information
advertisers use to target particular ads, and it would be useful to be
able to opt out of specific “interest groups” one by one. But like
FLoC, PIGIN does nothing to address the bad ways that online tracking
currently works. Instead, it would provide trackers with a massive new
stream of information they could use to build or augment their own user
profiles. The ability to remove specific interests from your browser might
be nice, but it won’t do anything to stop the companies that have already
collected that data from storing, sharing, or selling it.
Furthermore, these features of PIGIN would likely become another
“option” that most users don’t touch. Defaults matter. While Apple
and Mozilla work to make their browsers private out of the box, Google
continues to invent new privacy-invasive practices for users to opt out of.

It’s never about privacy

If the Privacy Sandbox won’t actually help users, why is Google
proposing all these changes?

Google can probably see which way the wind is blowing. Safari’s
Intelligent Tracking Prevention and Firefox’s Enhanced Tracking
Protection have severely curtailed third-party trackers’ access to data.
Meanwhile, users and lawmakers continue to demand stronger privacy
protections from Big Tech. While Chrome still dominates the browser
market, Google might suspect that the days of unlimited access to
third-party cookies are numbered.

As a result, Google has apparently decided to defend its business model on
two fronts. First, it’s continuing to argue that third-party cookies are
actually fine, and companies like Apple and Mozilla who would restrict
trackers’ access to user data will end up harming user privacy. This
argument is absurd. But unfortunately, as long as Chrome remains the most
popular browser in the world, Google will be able to single-handedly
dictate whether cookies remain a viable option for tracking most users.

At the same time, Google seems to be hedging its bets. The “Privacy
Sandbox” proposals for conversion measurement, FLoC, and PIGIN are each
aimed at replacing one of the existing ways that third-party cookies are
used for targeted ads. Google is brainstorming ways to continue serving
targeted ads in a post-third-party-cookie world. If cookies go the way of
the pop-up ad, Google’s targeting business will continue as usual.

The Sandbox isn’t about your privacy. It’s about Google’s bottom
line. At the end of the day, Google is an advertising company that happens
to make a browser.

--
Posted on RetroBBS
