Facebook Pushes The Boundaries Of Privacy Law: Is The Federal Trade Commission Up For the Challenge?


The Cambridge Analytica saga produced a drama-filled Congressional hearing but no privacy legislation. The last effort to enact privacy protection, the 2012 Consumer Privacy Bill of Rights, also went nowhere. Yet voters’ appetite for privacy protection is strong: A recent Pew survey found that just nine percent of social media users were “very confident” that social media companies would protect their data, and two-thirds said current laws are not adequate to protect privacy.

The Federal Trade Commission (FTC) enforces many privacy laws, several of which provide narrow rulemaking authority. But its broadest authority rests on policing acts of deception or unfairness, nebulous concepts in all realms, including privacy. Pursuant to those powers, in 2011 the FTC secured a 20-year consent order against Facebook, under which Facebook agreed to obtain consent from its users before sharing their data with third parties. Although Facebook could, in theory, face penalties of up to $40,000 per user per day for violating the FTC’s consent order, the mere threat of fines might not meaningfully deter Facebook’s behavior.

In a new paper published by the International Association of Privacy Professionals, Justin Brookman, Director of Consumer Privacy and Technology Policy for Consumers Union, argues that the FTC’s current powers are not sufficient to keep up with a modern tech platform’s data practices. As Brookman explains, in spite of the FTC’s existing authority and the 2011 consent decree, Facebook tracks what its users and non-users do on other websites and apps without their knowledge or permission; shares users’ private profile data with hardware manufacturers without any transparency; and shares personal data with third-party apps even after offering a control to turn it off.

Does the FTC need a new toolkit to police Facebook in particular and protect our online privacy generally? Would the current FTC really use these new powers? And what is stopping privacy legislation from moving through Congress? To answer those questions, we invited Consumers Union’s Justin Brookman, Neil Chilson of the Charles Koch Institute (and former Chief Technologist at the FTC), and fellow Bytes contributor Jodi Beggs. The chat was moderated by Hal Singer, editor of Washington Bytes and Senior Fellow of the George Washington Institute of Public Policy. The transcript was edited lightly for readability.

*          *          *

Hal Singer: Justin, what is it about the FTC’s consent decree with Facebook that makes it “incredibly narrow” in your view? Seems like the threat of financial penalties plus third-party audits should do something to curb Facebook’s worst impulses. Why aren’t these protections sufficient?

Justin Brookman: Sure, the potential fines are significant. But the substantive limits imposed by the order are flimsy, and the audit is just pro forma theater. The order basically says that Facebook (1) can’t lie about privacy and (2) can’t offer privacy controls that don’t work. Everything else is fair game! So the order doesn’t really get to the core of Facebook’s dubious privacy behaviors—tracking what you do on other websites and apps, tracking geolocation even when you’re not using the app, and targeting you based on what you buy in the physical world. The FTC could theoretically try to address that under general Section 5 authority, but they haven’t tried and it’s unclear if they would be successful if they did.

Neil Chilson: I’d note that I got ripped on Twitter for calling these things “audits.”  They’re technically “assessments.”

Singer: I’ve been ripped on Twitter for much less.

Jodi Beggs: I’m all about words meaning things, but I also notice that people argue over word choice when they don’t have more substantive arguments to make.

Singer: Justin’s piece also argues that the FTC “needs privacy authority that extends beyond deception or murky unfairness to accord companies’ data practices with consumer preferences and reasonable expectations.” What kinds of conduct are not captured under a deception/unfairness standard? In broad strokes, what exactly would the new tools entail?

Brookman: First, a new law should by default limit data collection, retention, and sharing to what’s reasonably necessary for a service requested by the consumer. Facebook collects a lot of data about me when I use their service, and I’m fine with that! It’s snooping around off of the platform that’s unexpected. Second, there should be clearer transparency about what’s happening. Today, most privacy policies are pretty empty—they’re designed to evade liability, not provide the market with information. Privacy policies should be more like SEC filings that aren’t read by everybody, but are translated to the market through the people who study those things for a living (me).

Singer: So a reasonably necessary standard and more transparency. Anything else?

Brookman: Yes, reasonable data security, access, and portability. And of course, the FTC needs a ton more staff and stronger tools—penalty authority so violations aren’t consequence-free, and rulemaking to offer clarity to the marketplace about how the law applies to new technologies and business models.

Singer: And those things aren’t already captured by the FTC’s deception/unfairness standard?

Brookman: I mean, some of that might be captured by deception/unfairness, but a clear standard would be preferable!

Singer: Neil, where do you come down here? Are the current FTC tools, including the deception standard, sufficient in your view? Or do you agree with Justin that the FTC’s toolkit needs expanding?

Chilson: I’m pretty positive on the current U.S. approach. It’s not perfect, and there are some things that could be improved, for sure. I have a new paper that explains what to consider when thinking about federal legislation. Under the current regime, we have enterprise-grade communications services available to consumers for no money out of pocket. News services for no money out of pocket. Powerful apps often for little or no money out of pocket. The world’s leading tech sector.

Singer: Neil, it sounds like you think the FTC’s current toolkit is sufficient. What is wrong with Justin’s framework?

Chilson: Justin has essentially defined the problem as “Facebook collects too much data.” Granted, Facebook collects a lot of data. But why is that a problem, exactly? What are the costs that have been imposed on consumers that the current approach has failed to protect against? The clearest costs have been the costs of data breaches resulting in identity theft. That’s a real harm, a real challenge, and worthy of our attention, but it has almost nothing to do with the types of data that Facebook, for example, collects. And it is a data security problem more than a broad privacy problem. Also, I don’t think requiring companies to ignore what they learn about how to serve their customers is a good idea.

Singer: So if you reject Justin’s reasonably necessary standard, what would you use instead?

Chilson: As I explain in my new piece, preventing consumer injury is the proper goal of privacy legislation, and legislation should directly pursue that goal. Liability should require showing actual or likely consumer injury, with material deception as per se injury. If liability hinges on injury, many of the other details of privacy legislation become less important.

Brookman: The downside to Neil’s consumer-injury standard is unwanted surveillance. Should I have no agency with regard to my privacy unless I can convince Neil of some objective harm? Other privacy laws aren’t predicated on this novel harms analysis. Wiretap laws aren't interpreted to assess whether I am harmed by someone reading my communications before finding a violation!

Singer: Jodi, you are the tie-breaker. Who wins this argument? Who wins the hearts of economists? Assuming econs have a heart.

Chilson: I don't think we should lump all econs together, heartless or not.

Singer: Jodi speaks for me and all econs in this answer.

Beggs: I think Neil does, but I’m also not sure that economists have hearts. While mandating transparency (and actually enforcing such transparency in a practical sense) is a pretty clear win since it allows consumers to make better decisions, mandating what information can be collected is in effect telling me as a consumer what data I am allowed to sell. In that sense, such regulation would be telling me that I can't sell personal information in the same way that I can’t sell a kidney.  I have no reason to think that this is automatically a desirable outcome—maybe it is, but I’d need additional information to be convinced.

Singer: Whoa. I wasn’t expecting Jodi to channel a Chicago School econ!

Beggs: I’d object to this characterization if not for the fact that Thaler is literally a member of that club nowadays. As a trivial example, if I value my privacy at $5 and a company can make $8 of profit from my data, it’s probably going to charge me an extra $8 for the product if it can’t use my data. This does not make me better off. In most policy situations we should be able to do better than bluntly banning an activity, so I’m generally skeptical of such approaches.

Singer: But you can conceive of a basis for intervention?

Beggs: When companies such as Facebook breach users’ privacy without proper compensation, they are essentially imposing a negative externality on the market for...well whatever it is Facebook produces. Externalities (e.g. pollution) are categorized as market failures, and there is strong economic justification for using policy to force the companies that produce the externalities to internalize the costs they are imposing on their users. Normally, if a consumer knew that a product had a privacy issue, it would simply lower her willingness to pay for the product.  This mechanism is problematic when it comes to companies like Facebook both because users don’t pay to use Facebook and because Facebook did not make the privacy practices transparent to the consumer.

Brookman: As Jodi noted, a lot of the data collection that’s happening is not transparent. Companies are free-riding off of lack of awareness of their practices. I’m interested to see what an informed marketplace might look like, though with regard to services insulated from competition by network effects, market pressures aren't going to be as effective.

Singer: And Jodi, what do you make of Neil’s proposed consumer-harm standard?

Beggs: “Harm” is a weird standard here, since if someone values their privacy then a breach of their privacy is by definition harm, but this isn’t really the legal definition of harm.

Brookman: YES!

Singer: So it sounds like Jodi is splitting her vote. Typical econ. On the one hand …

Chilson: I think we have to unpack “privacy.” It’s too vague a term.

Brookman: There’s an interest or an “economic good”—that is, the value that privacy law would enable people to protect and retain.

Chilson: What exactly do people value? Do they value the path they make when interacting with websites across the Internet?

Brookman: They value people not knowing what they are doing around the Internet. Or they value Facebook not turning the microphone on to listen to their conversations.

Chilson: And what do they value it relative to other things? How much do they value it, Justin? Because there is a huge body of empirical work that suggests most people don't value it much.

Brookman: That is a question we don’t know the answer to, because exercising agency over privacy (or this bundle of interests/economic goods) is extremely challenging. Which is why I suggest reordering defaults to be consistent with reasonable context.

Beggs: My impression is that people in an abstract sense don’t like feeling like they’re being watched, but I’m very curious as to whether they can (a) state their concerns more concretely, and (b) put their money where their mouths are in this context. Furthermore, what people would be willing to pay to maintain their privacy and what they would have to be paid to give up their privacy generally result in very different valuations (due to the endowment effect, among other things), and it’s not clear which is the appropriate one to use in a cost-benefit analysis.

Brookman: I don’t think it’s necessary to force people to articulate their concerns concretely, but I do think the money-mouth question is valid—that is, how much do they value it compared to other things. But I don't think they have the ability to exercise that choice today, especially with regard to services insulated from competition.

Beggs: They have the ability to implicitly exercise the choice, but it’s tied to the choice of whether to use the underlying product. Economically speaking, “lack of privacy” and “Facebook” are bundled goods and are not offered individually.

Chilson: Most of the data that Justin and others seem to be concerned about is data that is jointly created by the interaction between two entities. If we care about efficiency, we might ask who can extract the most value from that information and assign the rights there.

Brookman: I would posit it’s not a fair choice to say to someone, “Use Facebook or not (the only network that your family and friends all use), and accept all the opaque data practices that come with it.”

Beggs: I agree mainly because the monopoly nature of Facebook breaks the simplistic efficiency argument in a number of ways.

Chilson: If we are going to do property rights in data (and I think that metaphor has problems), I think the Coasian answer could be that the rights should be with the company.

Brookman: I’m mostly concerned about third-party data. I guess that’s technically data created by an interaction between two entities . . . but . . . I think I lost your point.

Chilson: The point being, why assign the right to only one of the parties?

Beggs: Rights have to be assigned in some way in order to bargain efficiently from a logistical perspective. (Rights don’t change the efficient outcome in many cases, as per Coase, but they determine who gets compensated.)

Chilson: To me, that complexity is why regulating the means of collecting data is often problematic. We should focus instead on addressing any harms. (And rights don’t change the outcome assuming no transaction costs. They most certainly do change the outcomes when there are transaction costs.)

Beggs: Right. I think one central problem is that a complex system, perhaps one that is psychologically costly to navigate, can “work” for someone who is an economic robot (I’m fine with the settings, whatever), but it is absurd from the perspective of a normal, sane human person. There needs to be some sort of “reasonable person” standard for such systems, similar to that in truth-in-advertising laws and such.

Singer: Let’s bring this back to something concrete. In his new piece, Justin speaks of “data permissions” as part of the mix of solutions. What are the current permissions that a website such as Facebook uses? What should those permissions look like in an ideal world? How should the defaults be set? And why can’t Facebook be entrusted to make the socially optimal choices?

Brookman: So Facebook offers a bunch of privacy controls today. But they are incredibly difficult to use! Have you ever tried to play with the Custom Audience control? It's a mess. I study privacy controls for a living and I find Facebook's privacy controls to be incredibly confusing and Byzantine. For me, I would keep Facebook's data collection by default limited to on-Facebook data—that is, what I do on Facebook. If they wanted to make an explicit value proposition for harvesting off-Facebook data—that is, what I do on ESPN, New York Times, or Angry Birds by virtue of Facebook serving a pixel on those services—for better ad targeting, good luck with that (though again, if the choice punished users for rejecting it, you run into the competition issue). As far as trusting Facebook to make socially optimal choices . . . bruh. Facebook is going to maximize value for its shareholders (and for its officers whose remuneration is heavily based on share price).

Chilson: Yeah, and I don’t understand why you want to define, in law, that the Facebook platform is what, facebook.com?

Brookman: Consumers reasonably expect that they’re sharing data with Facebook when they're on Facebook! Less so when they’re on ESPN.

Beggs: What’s kind of interesting is that we focus on the harm to the Facebook user from collecting off-platform data, but we never talk about how, say, ESPN feels on the matter. But ESPN’s privacy is violated in a similar way.

Brookman: Pour one out for ESPN’s privacy.

Beggs: I support equal opportunity for rage. If Facebook is making it so ESPN can’t sell ESPN’s on-platform data, that’s a garbage move, but not a terribly sympathy-generating one.

Chilson: If it’s just about consumer expectation, then focus on that, Justin.

Brookman: I think focusing on consumer expectations is a good idea. I know a lot of major publishers are certainly upset about being beholden to Facebook and Google (bwah, that’s Jason Kint’s music). 

Chilson: But I’ll add that, as mentioned earlier, my browsing history is completely worthless until combined and aggregated with many others to create an ad product that can fund free services. The economically rational choice might be to share the data.

Singer: Jodi, what does economics have to say (if anything) about the default settings? I’m thinking of the literature on bounded rationality and 401(k) choices. There, we “nudged” employees into socking away more for retirement by making participation in the plan the default selection. Do Facebook users need a similar nudge via a change in the default settings? And should it be compelled by the government?

Beggs: If people were so stressed about their privacy in an immediate sense, why would they then have to be nudged into protecting their privacy? The argument seems a little contradictory.

Singer: (stepping out of moderator role) It’s not contradictory! Users don’t know how their data is going to be exploited or can’t be bothered with changing the default settings. So they accept the status quo. And all hell breaks loose, so to speak.

Chilson: What “hell,” Hal?

Singer: Hell is being trapped in a philosophical chat with this libertarian crew—and finding no exit.

Chilson: The nudge argument relies on the assumption that consumers make the economically inefficient choice. That is not at all demonstrated in the case of privacy. Indeed, there are good reasons to think that there are social problems, such as in health care, where free-rider problems mean consumers don’t contribute enough data.

Brookman: I don’t think they need to be nudged. They should just be empowered. So defaults should be consistent with expectations and context. Far be it from me to tell someone they can’t share their life with the world and every #brand on the planet.

Beggs: (*whispers* defaults ARE nudges) In general, nudges work well for things that we know we should do but that our lizard brains really don’t feel like doing. I’m not convinced that nudging people into protecting their privacy fits this paradigm, unless you think that their lizard brains are simply too lazy to go through the settings, which I guess could be true. And the contradiction comes, Hal, in the part where someone can’t be bothered to do something that they supposedly really care about, but I guess that can be said of other nudge contexts.

Singer: Well their future self cares a lot about it, but their present self says, “I can’t be bothered.” Kinda like exercising. Or corking that wine bottle after your third glass.

Brookman: Their present self is like “I wish I understood what was going on but I don’t really have the bandwidth to deal right now.”

Beggs: I’ll buy that I guess. To be clear, I’m all for like 100 blinking signs alerting users as to what data is being collected on them, but I see that as a separate issue from nudges via default settings.

Singer: Neil, your future self wants to be muscular, clean-shaven, and doesn't want your tweets to be available for all to see. I know this for a fact, as I spoke with your future self. And he’s regretful.

Chilson: I don't think we have any evidence that consumers’ future selves care more.

Beggs: Can I borrow your time machine, Hal?

Singer: There’s a rental fee plus a small retainer.

Beggs: The crux of time inconsistency is that we never are our future selves. I think Annie has something to say about this actually.

Singer: Let’s shift gears again. Neil, in your forthcoming privacy piece, you advocate for “case-by-case enforcement frameworks where company practices are judged based on consumer outcomes.” Why do you think ex post enforcement is superior to general rule-making authority here?

Chilson: The main reason case-by-case is good is that it mitigates the regulator’s knowledge problem. Detailed legislation requires regulators to predict the future, else they risk a mismatch between the law and reality. Case-by-case enforcement narrows the inquiry to a specific case and specific facts. Regulators don’t need to think in abstract hypotheticals but can examine the incident in front of them. A case-by-case approach allows the regulatory body to address bad actors without attempting to design practices for an entire industry, and it enables the law to evolve alongside the technology in a much more organic, common-law-like fashion.

Brookman: I think there’s value to what Neil says, but there’s also a cost to that approach: the FTC is prohibited from using another tool to more clearly delineate reliable norms regarding a given practice. Individual cases are idiosyncratic and very labor-intensive (especially if litigated).

Chilson: Sure, there are trade-offs, and again, I think ultimately it goes back to how you feel about the current state of the world. Justin thinks it sucks, and I think the online world is amazing.

Brookman: I don’t think it sucks! But it’s not without its significant flaws.

Chilson: Let me add another plus for case-by-case: Case-by-case is a more permissionless environment, which enables a wider range of potential innovations, including completely unforeseen approaches. Permissioned approaches narrow innovation options, often requiring innovators to fit a new service into a pre-existing framework and established processes. This narrowing does the greatest harm in fields where innovation would otherwise be rapid, unpredictable, and disruptive.

Brookman: Neil’s point on permissionless-ness is right but also so vague and axiomatic as to be kinda useless. Yes, a free-for-all enables innovation. It may also enable terrible innovation. A rule against putting lead in paint stifles innovation!

Singer: (stepping out of moderator role for the last time, promise) The Economist had a piece in April rebutting this innovation point: “Opponents of privacy legislation have long argued that the imposition of rules would keep technology companies from innovating. Yet as trust leaches out of the system, innovation is likely to suffer. If consumers fret about what smartphone apps may do with their data, fewer new offerings will take off—especially in artificial intelligence.” With too little regulation, innovation can start to fall.

Beggs: I worry that a case-by-case approach could be exploited by lobbying power, etc.

Singer: Neil, you know that case-by-case is my solution to net neutrality and the tech platform problems. So I’m inclined to accept case-by-case for privacy enforcement. Can you think of any instances in which ex ante rules are warranted?

Chilson: There are certainly some areas where Congress has determined that ex ante rules are necessary. Credit reporting, children’s information, health information. Generally, in the data security area I think there is a better case for ex ante rules, because more people agree on what a good world looks like there.

Brookman: Ex ante rules can also provide more predictability to the marketplace, for companies and consumers.

Chilson: Out-of-date ex ante rules are not predictable. The GDPR is not predictable. The California Consumer Privacy Act is not predictable. The unpredictability is why companies have to spend so much on lawyers to help mitigate their legal risk.

Brookman: GDPR is not predictable, but much of that goes to uncertainty about whether European data protection authorities (DPAs) will enforce as written or not! COPPA (the Children’s Online Privacy Protection Act)  rules are predictable though (given rulemaking and extensive FAQs). You might not like them, but they’re fairly clear!

Chilson: Exactly. Ex ante rules have unpredictability in enforcement, just like ex post enforcement.

Brookman: Like ex post enforcement, but much less so. Everything is unknowable . . . (staring at hand).

Chilson: If we test by outcomes—that is, whether the consumer is harmed—that’s extremely knowable.

Singer: Justin, you argue that the FTC should be given the ability to obtain civil penalties for privacy violations (and for all violations of Section 5 for that matter), and that the penalty authority should not be subject to a cap. What is the current status of the FTC’s ability to impose fines under Section 5?  

Brookman: Today, the FTC can get civil penalties in a small subset of its cases—for example, if someone violates one of their industry-specific statutes (COPPA, Fair Credit Reporting Act) or someone violates a consent order (meaning they previously got busted for violating Section 5). So when most companies get caught breaking Section 5 (already a relatively small chance of enforcement, given the relative size of the agency), there’s no immediate consequence, just a possible PR hit, legal fees, and a possibility of significant consequences next time.

Singer: This issue of the FTC’s penalty authority came up recently in a deception case involving “Made in the USA” claims, in which certain Commissioners argued that the FTC cannot seek redress unless the agency can show that the companies charged more money because of the fraud they committed. Do you agree with those Commissioners that the FTC does not have the authority to issue monetary penalties for deception on its own?

Brookman: Absent civil penalties, at the very least, the FTC should be demanding disgorgement of ill-gotten gains when they settle a case—that is, if the company made money ripping people off, they shouldn't be allowed to keep that! In the Made in the USA cases, the FTC said they weren’t going to demand money unless people paid more money for products fraudulently described as Made in the USA. But that misses the point of disgorgement, which is that wrongdoers should not be able to profit from wrongdoing (whether consumers paid more or not)! (Here are our comments arguing as much.)

Chilson: I’d just point out that there is a legal difference between penalties and redress, which the FTC absolutely can get in deception cases. It can return money to consumers to make them whole. The agency directly returned nearly $400 million in refunds and supported the refund of $6 billion to consumers in the year ending June 30, 2017. (Read more here.)

Brookman: Yes, redress is good. Arguably, the fake Made in the USA manufacturers should have refunded people, or notified and made refunds available. But disgorgement is another tool that is too often ignored by the FTC.

Beggs: I think the point is that simply making the consumers whole once they complain doesn’t have a strong deterrent effect.

Singer: Justin, you also advocate for more detailed disclosures by online platforms, so that outsiders can meaningfully assess Facebook’s practices. I agree that most normal users aren’t going to read, let alone understand, the fine print of an online disclosure. But what is the practical utility in having regulators, academics, and Consumer Reports review the fine print? How do stringent disclosure policies “introduce meaningful accountability” for the behaviors of online platforms? The mechanism isn’t obvious, at least not to me.

Brookman: Sure, at least from my firm’s (Consumer Reports) perspective, it’s pretty straightforward. We’re trying to evaluate and rate products based on privacy and security, and a big piece of that work is looking at companies’ disclosures. But today, they’re often not very helpful! Companies blandly reserve the right to share data with partners, but there’s no specificity on what’s actually going on, and I suspect actual practices are much more constrained — and possibly even good! But because of the way Section 5 works (don’t lie!), the easiest way for companies to get into trouble is to say something wrong. For that reason, they tend to say only vague things that are tough to decipher. Ideally, we wouldn't have to rely on third-party statements, but not everything is testable in our labs (what happens to data in the cloud for example).

Singer: Can you give me an example?

Brookman: Take Google. They collect a ton of data, in a bunch of different ways, but we’re not really sure why and how. What does Google know about its Android users? I’m not really sure. Google’s also said from time to time that they have privacy enhancing technologies in place to ameliorate the privacy issues with certain practices. Google uses Android devices to build out maps based on WiFi triangulation, but they claim not to be maintaining precise location history for users. Or they have a system in place to measure how effective online ads are in translating into offline purchases without tracking individual users. If that’s the case, I’d like to give Google credit for that! But today, Google isn’t actually required to write any of that down, so it's hard to know what is happening under the hood.

Singer: In May 2018, the Europeans embraced a form of privacy regulation called the General Data Protection Regulation (GDPR). Among other things, GDPR gives consumers the right to transfer their data to another firm; requires companies to define how they keep data secure; and lets regulators levy fines (up to four percent of worldwide annual revenues) if firms break the rules. Opponents of GDPR-type protections warn that, due to larger firms’ ability to absorb regulatory compliance costs, GDPR will solidify the dominant platforms’ positions. The data here are a bit mixed, as Google’s "website reach" (the number of websites that contain each ad vendor's trackers) is up (by 0.9 percent) but Facebook’s website reach is down (by 6.7 percent). In any event, why would a mandatory change, in say the default settings, impose greater costs on small websites than on larger sites? Are there really economies of scale in complying with privacy regulation? And even if there are, why couldn’t a small firm—that isn’t already exempted from the law—outsource its compliance work to a third party that could exploit those alleged economies?

Beggs: I think the overly simplistic logic is that more established firms have higher profit margins. I’m not convinced there is more to the argument than that. If I were to try to make a case for GDPR differentially affecting smaller firms, it would be because there is a fixed cost of compliance. If this fixed cost is large enough, I guess it could push smaller but not larger firms into unprofitable territory.

Chilson: The ability of big companies to absorb compliance costs is only one argument. (And a good one, I think.) Another is that regimes that mandate consent benefit known and big companies because brand recognition might get a consumer over the opt-in hump. But it’ll kill a start-up.

Brookman: I don’t get the brand argument. If a company has a better brand and can get someone to agree to something . . . is that not how markets work?

Chilson: It is, exactly, and that benefits known brands.

Beggs: I don’t see how the brand argument is anything above and beyond existing network effects and such though.

Chilson: Brand is a quality signal. Consumers who recognize and respect a brand are more likely to jump through hoops to get a service. New companies, which lack any brand, don’t get that benefit of the doubt.

Brookman: I think that the notion that privacy protections will entrench Google and Facebook is belied by the fact that Google and Facebook have consistently lobbied aggressively against nearly all proposed privacy legislation in both the United States and Europe. I heard similar arguments that adoption of Do Not Track would favor those companies; again, however, both fought hard to stop industry adherence to that standard. As a result, Google and Facebook (and the vast majority of the ad tech industry) ignore users’ browser Do Not Track signals to this day. Certainly, if a company’s business model is predicated entirely on bad privacy practices, then privacy legislation will especially impact them, and will probably disadvantage them more than companies like Google and Facebook. Both companies have problematic practices that should be addressed by privacy rules, but both also have core products that can be monetized effectively without compromising user privacy. However, because those companies’ business models are also heavily reliant on the use of personal information, privacy law does impact them directly—and more than most companies. The Federal Trade Commission has brought actions against both companies for privacy violations, though due to weaknesses in the law and the limitations of its own authority, its actions have not sufficiently deterred their abuses.

Chilson: I think Justin’s argument confuses absolute and relative losses.  A regulation can be bad for Google but worse for competitors.

Brookman: As far as relative losses go: if someone’s business is just hidden, third-party tracking, then yes, it will be worse for them! And it should be.

Beggs: I do think there are a decent number of apps where tracking is basically their entire business model, but it’s not the product they provide to consumers.

Brookman: If someone is really fine with Google tracking them all over the web, and Google is not coercing that consent based on factors where they’re insulated from competition, then I’m fine with Google getting that consent. But they’re not today.

Singer: GDPR makes exceptions for smaller firms. And because GDPR fines are tied to a company’s revenues, European regulators in search of cash (or headlines) will likely target the larger platforms. Indeed, according to The Economist, Facebook “had already started to feel the force of the GDPR, which went into effect in May. Last month Vera Jourova, the European Union’s commissioner for justice and consumers, warned that it needed to amend its ‘misleading’ terms of service to make clearer how it uses personal data—or face sanctions.” Setting aside the issue of compliance costs, which might hit smaller (non-exempted) firms harder, don’t we expect most of the enforcement action to be aimed at big platforms such as Facebook and Google?

Chilson: Hal, Hal! Certainly we can expect most of GDPR enforcement to be targeted at big firms. But I don't think many VCs are going to buy the argument, "Invest in this small company; don't worry, the EU Data Protection Authorities won't sue you." Check out this VC funding study, which I’m sure Justin hates!

Brookman: Do not even get me started on the VC argument! And that VC study funded by Google and Facebook that ignores the post-enactment data showing no dip in funding and looks instead at just four months where investment also dipped in the US? Yeah, sure.

Singer: Jodi, how would an economist go about measuring the benefits and costs of GDPR with the benefit of hindsight—namely, six months of implementation? How can we tell if it was a good thing on net? Put differently, how would we know the Europeans have succeeded in solving the privacy problem? What kind of data should economists be seeking? What the hell is the counterfactual?

Beggs: The costs are easier to quantify in that they are largely the costs of compliance and whatever lost profits there are as a result of changing business models. The benefit side is far harder to quantify since it depends crucially on how much consumers enjoy having the agency and/or privacy that the GDPR affords them. One thing I think about a lot is whether anyone sees the GDPR notices and is like “Nope, I’m out” and stops using a site. If this proportion is really low, then I’m not convinced that the GDPR really changes anything.

Chilson: There have been tons of companies that just stopped serving EU IP addresses.

Beggs: I would definitely put that in the cost column then. To the extent that the GDPR didn’t actually force companies to stop collecting information and instead mainly mandated disclosure, you could proxy for consumer benefit by looking at how many people refused the GDPR agreement. But like I said, I’m guessing this proportion is small.

Chilson: The thing about information is, like many resources, it has different value depending on who holds it. And my argument earlier was that the info Justin seems most worried about, the tracking information that enables targeted advertising, has zero (or near zero) value to the individual consumer.

Brookman: But the value in preserving seclusion—the value in being left alone and hiding personal information from others—is not zero. That’s the point.

Beggs: To make an analogy, let’s say I was paying Hal $5 a month but didn't realize it. Then he is forced to inform me that this transfer is happening for some reason and I have the ability to stop the payments.  If I’m then like, “Nah whatever it's fine, just let them continue,” then the disclosure hasn’t really benefited me directly. If, on the other hand, I’m like, “Wait, no! Stop the payments!” then you can try to estimate a benefit, though even then that benefit might not be the full $5 per month. Justin’s point about indirect benefits is valid though. I just don't have a good way to put a number on it.

Brookman: Right to be forgotten! (just kidding)

Beggs: MOAR TRANSPARENCY!!

Chilson: Demanding seclusion while interacting with machines on the other side of the planet is as misguided as demanding to be secluded while walking down Fifth Avenue.

Brookman: Real transparency would be nice.

Beggs: Ooh I like that analogy. I say “Demanding seclusion while making a purchase in the middle of Bloomingdale’s” is a better analogy, but the point is the same.

Brookman: Sure, I can't demand seclusion from Bloomingdale’s. But maybe I shouldn’t expect that Bloomingdale's will share with Acxiom (the data broker). And I shouldn't expect that Facebook and Google drones in the store will be monitoring my purchases either (even though they have the technical ability to do so).

Beggs: I was thinking more of the hedge funds counting the cars in the mall parking lots, since neither the customer nor Bloomingdale’s is consenting to that. Also Bloomingdale’s absolutely shares retail data with IRI and other retail data providers and has for basically forever. Some grocery stores even have a direct feed of their scanner data to these companies, so I have some bad news for you about data collection in a historical context.

Brookman: Sure, the hedge funds (and my Facebook/Google drones). And reasonable limits on what Bloomingdale's does (with IRI or anyone else) too. That’s what the CCPA is designed to address.

Beggs: Companies collect retail data and sell it, and somehow this has never sparked outrage before as far as I know. If you buy a CD or iTunes download, it’s reported to Nielsen SoundScan, etc., along with some information about you.

Brookman: It never sparked outrage because people probably don't know about it!

Beggs: My point is that data has been collected for forever, so, to paraphrase Legally Blonde, Why now? Why this data?  Or perhaps Why is this data different from all other data?

Singer: Why do we eat unleavened bread on this day? Pretty sure the answer is in the Haggadah.

Brookman: Data’s been collected forever, but it’s being collected a lot more now. The Supreme Court has been confronting this issue in the Fourth Amendment context. Technologically, we rely more and more on third-party services and everything about it is technologically observable. But the Justices are rebuffing Neil’s argument, saying “Well sure, even if our data is out there, we still should enjoy some semblance of autonomy, and just because data is observable does not mean there should be a free-for-all with no individual rights.” I think that’s absolutely right, but we need to translate those concepts to the commercial privacy space.

Chilson: There is a big, big difference between my rights vis-a-vis the government and my rights vis-a-vis another private party.

Singer: Ok, here’s is a question for the whole panel: What parts of GDPR, if any, should be incorporated into a U.S. federal privacy law? And what parts of GDPR, if any, should be jettisoned?

Chilson: I think most of the worthwhile things that GDPR does, the U.S. approach already does, but in a different way and with a lot less baggage. I wouldn’t import much. MAYBE some limited data portability stuff.

Beggs: Take the transparency part but do it in a less BS way.

Brookman: I mean, many of the basic concepts in there are pretty standard suggestions for privacy law—more meaningful control/data minimization, better transparency, access and portability, security, etc. I would probably write the requirements slightly differently though. One criticism of GDPR is that it (in practice at least) favors process over substance. Effective privacy law should not simply mandate expensive processes and compliance programs. And core practices have not materially changed as a result of GDPR. Emailing everyone privacy policies and flashing a cookie banner on every page is not Mission Accomplished.

Singer: Some very important people on Twitter, such as Open Markets’ Matt Stoller, claim that the problem with the FTC isn’t a lack of tools but, instead, a lack of desire to enforce the law in general. What guarantee is there that the FTC would reach into its newly designed toolkit and go after a firm such as Facebook? Is there a way to design the new rules such that enforcement levels wouldn’t change given a change in administration? Franklin Foer has called for the creation of a new agency, the Data Protection Authority, to secure the sanctity of privacy. Others have called for an independent tribunal to adjudicate privacy disputes between users and websites, where users would have a new private right of action. Must all paths to privacy protection run through the FTC?

Chilson: No, not all paths to privacy need to run through the FTC. In fact, they don’t now. My paper lays out the many tools for privacy protection, from technology to social norms to private contract to torts to state AGs and sectoral laws enforced by other agencies. As to Stoller’s argument about under-enforcement, the U.S. has, and has long had, the most active privacy enforcement in the world.

Brookman: While I think the FTC could be more aggressive (e.g., demanding more remedies in the Made in the USA cases; I also think there’s more they could do with the Facebook order), I think they’re by and large doing a good job with very limited resources and legal authority.

Singer: What is holding up privacy legislation today? Some reporters, like David Dayen, suggest that Senator Schumer is too close to Facebook (Schumer’s daughter works there). Where do the Republicans stand on this issue? And how do we get through the roadblock? Would wrapping privacy into a larger bill—say, one that empowered the FTC to police discriminatory conduct by tech platforms—enhance or detract from its likelihood of legislative success? (Sorry, I couldn’t resist a plug for the Net Tribunal.)

Chilson: I think what’s holding it up is the extreme difference in visions for what the world should look like. We have a good example here. Justin appears to think large chunks of Facebook’s business model (the parts that help it compete with Google) should be illegal. But many people like the free services funded by that business model.

Beggs: One simple thing holding up legislation from both a practical and political perspective is that the costs seem far more quantifiable than the benefits.

Singer: That’s a good insight, Jodi. It’s hard to value the removal of a risk like a privacy violation. That could explain the failure to tackle lots of issues, including climate change.

Chilson: I think there are harms that many people can agree should not be permitted. If legislation focused on those areas, it would have a chance.  And I think most of those are more appropriately labeled “data security” rather than privacy.

Brookman: Any federal legislation is hard to pass these days, all the more so with a divided Congress in 2019. I think people have been hesitant to put rules in place for big tech companies, but that sentiment has radically shifted in the last couple of years as some of their warts have become visible to all. And some legislation is inevitable within the next 5-10 years because of the “increasingly, all data collection is technologically possible” point I made earlier. If we can’t win the technology war, we need legal tools/rights to turn off the panopticon.

Chilson: That insight about more data collection becoming technologically possible mirrors my paper!

Brookman: That’s great, though we just need to come to agreement about what the tools/rights are. I’m not holding my breath on self-regulation and soft norms!

Singer: Closing arguments? Rapid-fire round until I blow the whistle.

Chilson: I just don’t understand walking the public streets of the Internet and demanding that no one watch.

Brookman: The Internet isn’t a public street.

Chilson: It ain’t a private residence.

Brookman: If I’m on ESPN, I don’t expect that everyone else is hanging out there watching me!

Beggs: I feel like that is still an open question from a philosophical perspective, that’s kind of the issue.

Chilson: The question is whether that expectation is informed or uninformed.

Brookman: I think if you suggested to the average user that everything they did online should be observable by everyone else in the world, they might balk. They should balk!

Chilson: I’m not suggesting that it should be; I’m stating that it IS.

Brookman: But it’s not! It’s becoming less so, but it’s still the case that I have some degree of anonymity online. What's the Woody Hartzog formulation . . . Privacy by obscurity.

Beggs: I have some bad news for you about grocery shopping. I’m totally going to follow Justin around the grocery store to make a point.

Brookman: You can follow me around the H Street Whole Foods tomorrow around 11. But it’s hard to do at scale. Maybe in the future Amazon will make an API available. But I'm interested in staving off that dystopia.

Beggs: It’s a date!

Chilson: Thanks Justin and Jodi! See you at Whole Foods.

Singer: You know we’re recording this entire convo for the interwebs?

Beggs: Wait you didn’t tell me we didn’t have privacy here. Hold up.

Singer: Sorry.

Beggs: Oops.

Singer: Stalker.