Hackers and the Arms Race for Privacy
9:52PM Jul 28, 2020
And welcome back to HOPE 2020. Our next speaker, David Sidi, will be presenting "Hackers and the Arms Race for Privacy." David is a PhD student in the School of Information at the University of Arizona. David, thank you very much for being with us today. We really appreciate you coming and sharing with us.
Thanks. Thanks very much. I'm happy to be here.
So yeah, my talk is "Hackers and the Arms Race for Privacy." Before I get into it, I wanted to point out that I have a link here on my slides which includes the outline for much of what I'll be saying. I want to include that in case you are not an auditory person, or you'd otherwise benefit from seeing the words. I'll be pretty close to the words that are shared under this PDF here, so if you want to go and get it, it's available for everyone. Okay. Cheap personal computers in the 80s and 90s brought computing into private life for the first time.
The Altair 8800 was one of the earliest personal computers, and in a nice bit of foreshadowing, it had previously been called the Little Brother. Pioneers of personal computing heralded a revolution in familiar terms: the beginning of a new age. Now the computer would serve the individual, inverting its reputation as a tool for institutions seeking to reduce everyone to a number.
It'd be less than ten years until those themes of empowerment were revisited again, as the internet started connecting networks of computers to each other. There was a democratization of access to information resources that had previously only been available to powerful organizations. Now the individual user was a prolific producer of personal information online, and that personal information was increasingly available to a broad range of actors, including both private firms and governments. Soon, information privacy came to occupy a central role on the internet. In practical terms, one result of individual users seeking to control their personal information online was the development of personal privacy-enhancing technologies. And those soon came to be locked in an arms race with technologies for circumventing them.
As in other cases of one-upmanship — like the humiliating result of sexual selection seen here — the cumulative effect of the privacy arms race was disfiguring to the original vision of control on the internet.
Worse still, personal PETs haven't really been effective. The arms race has left us with eroding privacy for years, as powerful actors have sought to dominate the space, either in the market sense or in the military sense of the word.
From the military perspective, you have General Michael Hayden — the former director of the CIA and NSA, and a kind of apologist for the intelligence community after the Snowden revelations — who articulated the official position that the internet is not a digital Eden, but Mogadishu. This is Mogadishu: a domain to be dominated.
From the commercial perspective, you have companies consolidating users to capitalize on network effects, resulting in a few very dominant services.
You can't blame people for having their perspective, of course, with their own sets of goals and priorities, but we've got to have ours as well. The internet should be a place where it's hard to violate people's privacy; where there's a space to do stuff without it having commercial or political consequences; where you can be free of scrutiny for intelligence value or legality, for the innocent. Central to the internet's promise is an idea of privacy.
I'm going to argue that it's time to take a long-term perspective on privacy, to defend those early ideals by focusing on the arms race dynamic itself, from a design perspective. That means thinking less about personal privacy-enhancing technologies — which tend to make kind of privacy mall ninjas out of us — and more about distributed technologies that are designed to impose costs for trying a privacy attack, and so have a more systematic effect. Which is to say, I think it's time that we start thinking about strategic PETs: strategic privacy-enhancing technologies.
So strategic PETs come in pairs. There are transparency technologies, which are important to the ongoing and timely identification of privacy attacks, and to the measurement of an attack's severity. And there are what I'll call screw-you defenses, which impose costs for attacks rather than just repelling them.
The screw-you defense is intended to discourage new attacks — to weaken privacy attackers against competing approaches that provide close substitutes but are more private. And the name just comes from the "screw you" effect in psychology, where people agree to be participants in a study but are reactive rather than compliant to the protocols, and so try to thwart what they believe to be the purpose of the study.
So the operators of strategic PETs need to have a certain character. The first important trait is independence. People — mostly lawyers and policy people who I know — sometimes say that privacy technology is a kind of fig leaf against organized efforts by powerful actors. And they conclude from that that we need privacy regulation instead; that's where our attention should be. But to paraphrase Dan Geer — who's shown here, and is the former head of In-Q-Tel, the venture capital arm of the intelligence community — apparatchiks, and optimists about governments' and companies' ability to grade their own work and restrain activities that would otherwise serve their mission, might like that kind of regulation approach. But hackers, who tend to be pessimists in that respect, will want to be more direct.
The second trait that operators and developers of SPETs need is bravery. They run the risk, in running SPETs, of being targeted, and there's a cost associated with that even if the targeting isn't successful. So the tools that we're going to talk about can be designed with those kinds of risks in mind, and that might involve incorporating some traditional PETs — in particular, things like anonymity networks. But attacks on the sort of smaller group of SPET operators are still going to be a risk.
So, to put it in a sentence, you could say the safest way to practice free speech is still abstinence. That doesn't mean we shouldn't be doing what we're doing with strategic PETs. The final trait I'll mention is practicality. Over the years, Ian Goldberg gave a sequence of surveys of the landscape of privacy technology, separated by about five years each. And they all included, in one way or another, the observation that a lot of great, interesting privacy research is being done, but only a tiny fraction of it ever becomes practical enough for any kind of adoption.
The incentives aren't really right for academics to follow through all the way to deployment of their research once it's been developed enough for publication. Adoption should be a central goal for strategic PETs from the beginning, with a focus on resourcefulness: hacking things together using existing tools.
So that's why you, J. Random Hacker, matter. These are hacker traits, at least in my own experience, and there's a need for them. This gives a different form of the kind of public-interest technologist that Bruce Schneier and others have pushed for a few years now. I think it's a little more important than advising lawyers and senators on technical issues, though that's important too, in its role. So this is a kind of call to you, in a sense.
Okay, so let's look at some of these strategic PETs in more detail. As I said, they come in pairs, with transparency technologies seeking to identify and measure the severity of privacy attacks, and screw-you defenses imposing costs. We'll start with transparency technologies.
Transparency technologies have the character of honeypots. Sometimes they're the kind of opposite of traditional disclosure-minimizing "hard" PETs, in that their value lies precisely in being attacked: attack is what enables good measurement.
The fact that they're not hard PETs does not mean that they're on the other side of the traditional distinction — soft PETs — since those tend to involve a lot of trust assumptions. They're really their own thing, separate from the distinction as it's traditionally applied to personal PETs. So the first category of strategic PET is traitor tracing.
The idea there is basically to let you attribute oversharing as an attack on privacy — to know who it is that's shared something that you didn't want them to share so widely.
The easiest example is a service collecting an email address. So here there's a Facebook captive portal — you might recognize it if you've ever used a Pineapple; I think this is one of the standard templates they use on the Pineapple — but it's also a legitimate login for a captive portal that might be used to access some coffee shop's Wi-Fi.
So the coffee shop might take in your email address and then share it in some way. Another kind of example might be giving an email address to sign up for two-factor, and then having it shared. Those, by the way, have already happened — so this isn't some hypothetical.
And there are many more examples. A familiar transparency response is to provide a fingerprinted email address in cases like this, and there's already an existing way to do it.
One way to go is to just serialize the identity of the party you're giving the address to into the local part of the email. So here we have "joescoffee" stuck in the first half of the local part, and then the other half of the local part — here, "fubar" — is used as a pseudonym that determines where mail sent to that address gets forwarded. Receiving email from anyone but Joe's Coffee at that address, which is specialized for Joe's Coffee, then reveals that Joe's Coffee is sharing it.
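As a rough sketch of that scheme — the domain, the key, and the naive service-to-domain check here are all hypothetical placeholders, not any real service's behavior:

```python
import hashlib

DOMAIN = "example.net"     # hypothetical forwarding domain you control
SECRET = b"forwarder-key"  # hypothetical key held by the forwarder

def make_address(service):
    """Derive a per-service address: <service>.<pseudonym>@DOMAIN.
    The pseudonym is a short keyed hash, so the forwarder can verify
    incoming addresses without storing a mapping table."""
    tag = hashlib.blake2b(service.encode(), key=SECRET, digest_size=4).hexdigest()
    return "%s.%s@%s" % (service, tag, DOMAIN)

def attribute_leak(rcpt_local, sender_domain):
    """Given the local part of an incoming mail's recipient and the
    sending domain, name the service that leaked the address. Return
    None if the address is forged or the mail is from the service
    itself (naively modeled here as <service>.com)."""
    service, _, tag = rcpt_local.rpartition(".")
    expected = hashlib.blake2b(service.encode(), key=SECRET, digest_size=4).hexdigest()
    if tag != expected:
        return None  # forged or mistyped address
    if sender_domain == service + ".com":
        return None  # mail from the party we gave the address to
    return service
```

So mail arriving at the joescoffee-specialized address from anywhere but Joe's Coffee points the finger at Joe's Coffee.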
There are several usable services out there for doing this, including spamgourmet — that's the logo there, Wonderful Spam — and 33Mail. You could also do something yourself that's a little more interesting, like roll a mail server and have some domain churn using afraid.org, and try to do something to avoid blacklisting and things like that.
The problem with these centralized services that already exist will be familiar to anonymity researchers: compulsion, basically. They're in the position of old-school type 0 remailers, so they're subject to the same problems as anon.penet.fi way back when — and, less far back, the same problems that Riseup and Lavabit faced — which is that you can get hit with a court order, or threatened in some other way, and be forced to reveal the mapping from a pseudonym to the forwarding email address, because it's centralized.
So that's the first challenge: attribution. The second challenge is to proactively find services that are performing this kind of attack. Remember, this isn't a personal PET that you're using for your own purposes, so you can get something you want in a more private way; this is an auditing tool. You want to go out and learn what the state of things is for a particular service on the web or on Wi-Fi.
So basically, the challenge is to do fingerprinted email forwarding right: to drive around a bot that signs up for things, provides fingerprinted but not recognizable email addresses — that is, addresses that aren't recognizable as fingerprinted — and sees what sharing is revealed by what comes in the mail, all automatically, and all while protecting the forwarding agent from compulsion. So that's kind of the gauntlet for the strategic version of this existing stuff.
All right. So that was the role of automation and the bots. The next thing I want to talk about is observational studies. The idea here is to treat a service like a website as a black box: not to analyze its internal behavior, but just to observe it externally, and to record what it does without intervening or trying to control any variables. So in this diagram — which is actually from a paper on information flow analysis and experimentation by Michael Tschantz and colleagues, not a monitoring paper — we're basically in the lower right: not analyzing internal behavior, and not exercising any control.
How about browser fingerprinting? If you're not familiar: a website makes a bunch of requests for values that separately have some legitimate uses — showing a video at the right size, or rendering a font properly, for example.
Together, the requests that are made are unique to the browser — or close enough that a site measuring them on visitors gains a kind of stateless tracking: it can take an identity off of them.
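To make the mechanics concrete, here's a toy illustration — not any real tracker's code — of how individually innocuous values combine into a stable identifier:

```python
import hashlib
import json

def fingerprint(attrs):
    """Hash a dict of per-browser measurements (user agent, screen
    size, installed fonts, ...) into one stable identifier.
    Canonical JSON keeps the hash independent of dict ordering."""
    canon = json.dumps(attrs, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

# Two hypothetical browsers differing only in one installed font.
browser_a = {"ua": "Mozilla/5.0", "screen": [1920, 1080],
             "fonts": ["Arial", "Helvetica", "Comic Sans MS"]}
browser_b = dict(browser_a, fonts=["Arial", "Helvetica"])
```

Each value alone has legitimate uses; together they single the browser out, with no cookie or other state stored on the client.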
So here's an example of this, by Anupam Das and colleagues, which shows third parties on a website. In listing five is reuters.com, a news website, and there's a third-party advertiser who's recording very fine-grained details from the sensors on a mobile device. They were interested to see whether this kind of attack was happening — using the sensor values for fingerprinting — and they found that it was.
And in listing six, similarly, there's a New Zealand website, stuff.co.nz, where again orientation data is being sent to a third-party advertiser. So one response to this kind of thing has been to build bots with an instrumented browser that records requests that are fingerprint-like. That's what OpenWPM does — it's a rather large project now; I think it's under Mozilla, and it started out, I think, at Princeton. And FingerprintJS can be used to audit how well you're doing at avoiding being detected while measuring fingerprinting. So there's a mature tool here, like in the previous cases.
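In the same spirit as OpenWPM's instrumentation — though this is only a crude stand-in, with an illustrative, made-up list of query keys and hostnames — a monitor might sift a request log for third-party requests carrying fingerprinting-surface values:

```python
import urllib.parse

# Query keys associated with fingerprinting surfaces (illustrative list).
FINGERPRINT_KEYS = {"fonts", "canvas", "webgl", "screen", "deviceorientation"}

def flag_requests(first_party, urls):
    """Return the third-party request URLs whose query strings carry
    fingerprint-like keys. Same-site requests are ignored, since the
    values have legitimate first-party uses."""
    flagged = []
    for url in urls:
        parts = urllib.parse.urlsplit(url)
        host = parts.hostname or ""
        if host == first_party or host.endswith("." + first_party):
            continue  # first-party request
        if set(urllib.parse.parse_qs(parts.query)) & FINGERPRINT_KEYS:
            flagged.append(url)
    return flagged
```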
But the SPET challenge is, again, to prevent compulsion against those who are running these kinds of bots — an instrumented browser, for example. For academic research, the incentive to go after authors is pretty low, so academics usually don't do much to protect their anonymity in cases like this; these authors didn't really, for example. But for a SPET, anonymity is going to be important.
So adapting frameworks like OpenWPM to work with better anonymity is an important goal. Also on the theme of practicality, another SPET consideration is that these projects need to continue to work — unlike a research project, which, once it's in a state where it can be published, can be left as an artifact. That maintenance presents some challenges, especially for fingerprinting, where you have these movements — under Google, for example — to give greater capabilities to the browser, which is exactly the sort of source of individuating information that you would need for fingerprinting.
An additional thing a SPET would need in this kind of context is to audit for changes in behavior by the site that indicate it has detected it's being monitored. Again, academics don't really need to do that; they only need to demonstrate an issue. So if you miss an edge case — some site with very sophisticated countermeasures in place — it's not a big deal. But for a SPET, those are the cases that are most interesting and important, so it's really important to be at the forefront there, as it were. So you can detect another kind of arms race here — which I mentioned, you know, it's in the title — an arms race with fingerprint-detector detectors, which are then detected by fingerprint-detector-detector detectors. Right. And I'll get back to it.
Another, slightly sensitive issue — because it's personal for many, and that always brings up the most controversy — is the capture of privacy research or projects through hiring. I think of this as a kind of new manifestation of the notorious embrace, extend, extinguish idea that came out of Microsoft. I paused before including this, even though I think it justifies some discussion, because there are lots of people who I respect who are doing things that fall under this broad rubric. But you don't come to HOPE to mince words and avoid things that need to be discussed. So I think we should discuss this. Anyway, in this attack, a company offers a good salary and a role in a privacy-related research arm to the leads of some project — like here at Google, or here at Facebook.
They talk about their commitment to improving privacy, and how influential their platform is, and what kind of practical, real-world difference you can make there.
So what I want to point out here is that even if the researcher's work continues unchanged after receiving funding of this sort — which I think does happen; in many cases privacy researchers are very committed to their work — even if it is unchanged, the question has to be asked: why is funding in this public way worth it to these companies?
One possibility is a bet, which is something I've seen before: that researchers are unlikely to produce deployed systems that would impact the privacy attacks on the ground that these companies are so invested in. There's a kind of tradeoff between the likely public-relations benefit of publicly supporting privacy researchers, especially famous ones, and the risk that they're going to undermine, you know, your business.
So, tool-wise, the existing ones are sort of orthodox tools for scraping, such as Scrapy. And those could be used to just get the names of participating privacy researchers, maybe learn about how their research has evolved, and do a little before-and-after analysis around when they received funding. But again, even if it hasn't changed, I think there's another issue here to be addressed.
So, technically, the issues for strategic versions of these scraping technologies would be what we've seen before: essentially, there needs to be some additional anonymity provided, and there needs to be some attempt to conceal that a scraper is operating, in order to get real results.
Okay, the next example of a transparency tool is one used for compulsion resistance. I've talked about this in different ways, but here the issue is tools that reveal excessive secrecy imposed by legal means — it's more of a governmental thing that I have in mind here.
So an example is the gag order that comes with national security letters; here's an example of one. Those prevent timely disclosure to anyone that the letter has even been received, except maybe the recipient's lawyer and the agents that are directly needed to execute the search. And gags don't only
come with national security letters. People sometimes forget there are other legal processes — like FISC orders, or warrants under the Electronic Communications Privacy Act — which have been historically very important. So the official means of transparency for these kinds of things are
the sort of coarse form of release where there are bins — the smallest bin size is 250 orders — and you can release which of the bins you fall in every six months. So you can say zero to 249, 250 to 499, and so on.
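To see just how coarse that channel is, here's the binned report as a one-liner (using the 250-wide bands just described):

```python
def report_band(count, width=250):
    """Collapse an exact order count into its semiannual reporting
    band, e.g. 0-249 or 250-499."""
    lo = (count // width) * width
    return "%d-%d" % (lo, lo + width - 1)
```

Receiving one letter and receiving 249 letters read identically; the channel reveals almost nothing, twice a year.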
But that official transparency channel is still very opaque. And, you know, the long-term privacy perspective stretches into the future — that's the side I've been emphasizing — but it also stretches backwards. It's worth noting that the secret FISC orders — that's the Foreign Intelligence Surveillance Court, the secret court that rules on a lot of these issues — the secret FISC orders that authorized collection on many individuals at once, or that left a lot of discretion to agents in determining whether legal standards were being met: those were the legacy of the Snowden revelations. That's what came out, and those wouldn't have been caught by these binned semiannual reports. So you might say, with Dan Geer again, that we have reason to be pessimistic about the official channels.
The existing technology here to build on is warrant canaries. A warrant canary is a published statement, sometimes with some verifiable authenticity and indication of timeliness, stating that a gagged legal process hasn't been received. Once that changes — once legal process with a gag has been received — the canary is either removed entirely, or its timeliness verification is allowed to lapse. So you sometimes say that the canary dies, which indicates there's a problem. The strategic addition to these existing technologies is to target something more particular and problematic in the practices; I've given a couple of examples of that already. It's a kind of broader problem to say what those are, and how to craft a warrant canary with the appropriate sorts of assurances and specificity to be useful for transparency purposes. This is the kind of place where I think technologists should be interacting with policymakers, lawyers, and historians of information privacy, to come to an informed idea about what to do.
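A minimal sketch of the authenticity-plus-timeliness mechanics — real canaries use PGP or Ed25519 signatures rather than this stand-in HMAC, and often quote a recent blockchain block hash as the unpredictable freshness beacon:

```python
import datetime
import hashlib
import hmac

SIGNING_KEY = b"demo-only-key"  # stand-in for a real signing key

def issue_canary(today, beacon):
    """Publish a dated statement of non-receipt. The beacon should be
    a public value that couldn't have been predicted in advance."""
    body = ("date: %s\nbeacon: %s\n"
            "statement: We have received no gagged legal process."
            % (today.isoformat(), beacon))
    tag = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return body + "\nsig: " + tag

def canary_alive(canary, now, max_age_days=30):
    """A canary 'dies' if its signature fails, it was tampered with,
    or it's allowed to lapse past max_age_days."""
    body, sep, tag = canary.rpartition("\nsig: ")
    good = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not sep or not hmac.compare_digest(tag, good):
        return False
    issued = datetime.date.fromisoformat(body.splitlines()[0].split(": ")[1])
    return (now - issued).days <= max_age_days
```

Removing the statement, altering it, or simply not re-signing all produce the same observable signal: a dead canary.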
Another, more technical issue is managing the canaries. Canarywatch.org — the organization with the logo you see here — existed for only a year; it shut down in 2016, citing uncertainty in how to interpret dead canaries. So management of canaries, to avoid the kinds of problems Canary Watch found, is a strategic challenge: you need to take the burden of warrant canaries out of the hands of the individual services, beyond maybe just affirming or failing to affirm that they've received a legal process. There are plenty of questions here, but I think it's a rich area, and something that could have good strategic effect.

The last thing I'll mention before moving on is that there's a need to handle further compulsion, in the form of legal challenges to the uses of canaries that I've described. If canaries become more effective as strategic PETs, that's going to be something that happens, right? You should expect more challenges to their use. Right now they haven't been challenged in court, that we know of, but it's an open question how well they'll do. So, design-wise, it would be prudent to build in some protections: to have a distributed design, and to include a variety of legal jurisdictions in the functioning of the system.

So, a summary for transparency technologies. Strategic PETs come in pairs, and transparency technology is the first half: it helps to identify and measure the severity of privacy attacks. The emphasis is on simple, automated tools — here and elsewhere — that do something narrow relatively well and can be fully deployed. It's important to have ongoing and distributed measurement, to maintain accountability for privacy attacks; that's central to transparency technologies' purpose. We talked about traitor tracing, observational studies, and compulsion resistance as areas where you could look to start making some progress.

Okay. Screw-you defenses are the other half of SPETs, and they impose some kind of cost for attempting a privacy attack. The most obvious screw-you defense is noise injection: essentially, misrepresentation of the attributes that are subject to privacy attack. It sometimes takes
the form of misrepresenting in a way that is particularly attractive to the target, in order to maximize the wasted resources — a sort of sticky-honeypot kind of model.
You can think of this as kind of the complement to the idea that you never know when you're being watched: you never know when you're being lied to. It's weaponized garbage-in, garbage-out, to waste resources.
An easy example of this — maybe it's familiar — is AdNauseam, a browser extension developed by Daniel Howe and Helen Nissenbaum, which clicks randomly on the ads on a page, and also optionally blocks them for the viewer, and does a few other things.
And the strategic addition to that — so, obviously, that's problematic from the perspective of predicting a person's interests based on their click behavior; right, you're introducing a lot of noise.
I still think of that as a proto screw-you defense, not quite a full SPET yet, because there needs to be some additional automation and targeting of its use. A bot that was guided by transparency PETs toward offending sites could preferentially visit those sites and click randomly, to reduce the value of tracking or advertising there. I just want to pause here and note — this is the bots point again — that Daniel Howe is an artist, among other things, and that several artists have undertaken projects that I would consider proto-SPETs. I was reflecting the other day on why that might be, and I think it's probably because SPETs are a kind of use of free expression to protect privacy, and that's a rich area of exploration, for an artist especially. So here I picked out a few that I thought of as good examples. Starting in the very upper left corner you have Daniel Howe, and then, going clockwise, we've got Trevor Paglen,
the Yes Men, and three people whose names are all Janez Janša — that's part of the performance; they all named themselves after a Slovenian politician and then work together.
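The targeted ad-clicking bot mentioned a moment ago might be sketched like this, with the offender list supplied by the transparency half — all names here are hypothetical:

```python
import random
import urllib.parse

def plan_clicks(ad_urls, offender_domains, background_rate=0.1, rng=None):
    """Choose which ads to 'click'. Ads served from domains flagged
    by transparency measurements are always clicked; the rest only at
    a low background rate, so effort concentrates on offenders."""
    rng = rng or random.Random()
    clicks = []
    for url in ad_urls:
        host = urllib.parse.urlsplit(url).hostname or ""
        if host in offender_domains or rng.random() < background_rate:
            clicks.append(url)
    return clicks
```

The background rate keeps the bot from being a perfect oracle for which domains the transparency half has flagged.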
So a different kind of example of injection might focus on email-based privacy attacks. This is, again, probably familiar: there's a kind of deceptive tracking that goes on in email, where the email body has links in it whose text doesn't match their target.
So when you click the link, you're sent not where you expect to go, but to some third party, while carrying a payload of your personal information. The third party then happily takes in the payload and forwards you silently on to your destination, none the wiser. There's a company called Granicus that does this for the US government, for several agencies and departments. They run lnks.gd — the "gd" is not a god-complex thing, I think — and they were GovDelivery before.
And what you see here in bold is the third-party forwarding link, and the string there is just some base64-encoded text. It's not encrypted — it's totally in the open; anyone can decode it. And so this opens up a clean approach to a SPET, which would just combine detection of this technique — it's fairly obvious — with some modification of the data, ideally in plausible ways, before any link target is visited. So I did that: I just wrote a little script to do this, and it took me, you know, five minutes or something. Developing a script to do it in a more robust way wouldn't be too big a deal.
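The little script in question might look something like this — the forwarder hostname is illustrative, and the payload is assumed to be a bare urlsafe-base64 blob at the end of the path, as described:

```python
import base64
import urllib.parse

FORWARDER_HOSTS = {"lnks.example-forwarder.gov"}  # hypothetical tracker host

def _b64decode(blob):
    # Re-add the padding the tracker strips off.
    return base64.urlsafe_b64decode(blob + "=" * (-len(blob) % 4))

def decode_payload(url):
    """If the link points at a known forwarder, decode the payload it
    carries. It's just base64 -- not encrypted; anyone can read it."""
    parts = urllib.parse.urlsplit(url)
    if parts.hostname not in FORWARDER_HOSTS:
        return None
    return _b64decode(parts.path.rsplit("/", 1)[-1]).decode("utf-8", "replace")

def scramble(url, fake):
    """The screw-you step: before the link is ever visited, swap the
    payload for plausible garbage of our choosing."""
    parts = urllib.parse.urlsplit(url)
    if parts.hostname not in FORWARDER_HOSTS:
        return url
    head = parts.path.rsplit("/", 1)[0]
    blob = base64.urlsafe_b64encode(fake.encode()).decode().rstrip("=")
    return urllib.parse.urlunsplit(parts._replace(path=head + "/" + blob))
```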
But the effect of this kind of thing is pernicious: people can't see the true destination of the link they're clicking, and that opens up a lot of security issues in addition to privacy ones. It's a problematic practice.

Okay, so a different sort of screw-you defense is a little less direct; I want to turn to it now. It follows from the insight that, to establish stable conditions that are favorable to privacy, you need the services that do something without good privacy to be replaced — you can't have a sort of vacuum — and the replacements should be close substitutes that are more private. The easiest examples here are centralized services; just to pick two, Twitter and Cloudflare. For Twitter the case is maybe the clearest, I think, so I took that one. There's already plenty of existing technology for mirror bots — there are mirrors of the Snowden account on Mastodon, off of Twitter. The strategic addition here would be to produce a distributed network of mirroring bots that choose accounts based on their follower counts, or some other relevant attributes, and then mirror them systematically to a more private alternative, like, say, Mastodon.

Okay, so I'll just say a few words relating transparency tools and screw-you defenses, and then wrap up. First, as mentioned, SPETs come in pairs: transparency is the first part, which helps to identify privacy attacks and measure their severity, and screw-you defenses are then tailored by the transparency half to be proportional and necessary. Severity, I want to add, can be measured precisely. I haven't added much that's concrete to that claim, but you can say some precise things about measuring the severity of a privacy attack. For example, in anonymity there are plenty of metrics for measuring
that sort of thing. You could use bits of normalized entropy for browser fingerprinting, or, in the database-privacy context, you could use epsilon in differential privacy; there are well-developed approaches to measurement in those contexts. Proportionality of response in the screw-you defense would then just be an additional step of coming up with a way to measure the severity of the defense. So: how much does a website operator expect to lose in ad revenue if 20,000 clients visit and click randomly on its advertisements regularly for a month? How many users would adopt another, more private service if 10,000 clients mirrored all the privacy- and security-related content they could see for a month? And so on.
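For the fingerprinting case, the normalized-entropy measurement is easy to state concretely:

```python
import math
from collections import Counter

def normalized_entropy(fingerprints):
    """Shannon entropy of the observed fingerprint distribution,
    divided by log2 of the population size: 1.0 means every client is
    fully distinguishable, 0.0 means they all look alike."""
    counts = Counter(fingerprints)
    n = len(fingerprints)
    if n < 2:
        return 0.0
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(n)
```

A population where everyone shares one fingerprint scores 0; all-unique fingerprints score 1. A noise-injecting screw-you defense would aim to push an attacker's measured value toward 0.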
Second, SPETs are distributed. They're not personal privacy technologies that every user needs in order to do some other first-order task more privately; they're special-purpose tools. So one thing that follows is that we need some kind of client to coordinate the distributed networks of these strategic PETs.
And then, thirdly, SPETs use bots. This is related to the second point: bots can be used to seek out privacy attacks and to execute screw-you defenses on users' behalf. It's really kind of a matter of what's deployable. There are often existing tools out there — I've tried to point them out where I could — that are pretty mature, and that could be used, with a few extra protections and a mind toward the strategic approach, to create something that's really practical. So, to give a more concrete picture of how that might look, a SPET
might follow the model of the at-home distributed computing projects — Folding@home was an early one — where you sit with a client running as a daemon, wait for some resource availability, and then do your job when you perceive a low load.

So instead of folding proteins or whatever, the client can run measurements of overbroad bot blocking, or run as a targeted ad-clicking bot, or update a warrant canary as needed, or publish what's learned from other clients in some censorship-resistant way. I'm sure you can imagine plenty of ways to do that.

So, last thing here. "Great," you might think, "won't this whole thing just recur at a higher level? Aren't there going to be attacks against strategic PETs, and we're just going to go flying away again?" There will be such attacks, and I think the design of strategic PETs should model them thoughtfully. That includes legal challenges that make a big, showy display of going after some random schmuck, as is to be expected when there's no single operator to be targeted. We've seen this before in cases of file sharing: the woman in the US shown in this chart was forced to pay $1.9 million for 24 pirated songs from Kazaa, an old peer-to-peer sharing system. We can anticipate this kind of thing, and it should be part of the design from the beginning.

Bot detection and blocking will likely escalate as well, aided by actors like Cloudflare. But that's, I think, good ground to fight on. Blocking actually introduces a lot of issues from other areas that make it favorable, I think, to the bots. It's a little bit like the analog hole problem in intellectual property protection. You sort of ask: how could you prevent someone from pulling out a camera and recording what the human eye would see on a display while a copyright-protected movie plays? Or, maybe more appropriately, how could you prevent someone from reading from some late-stage buffer, to avoid having to dust off the camcorder? And you can ask the same kind of question about bots: how do you stop a bot that acts as a keyboard and mouse and replays human behaviors, without bothering the heck out of non-bot users?
Okay, just to conclude.
I'm looking forward to some discussion, but I want to say: Austin Hill, one of the founders of the defunct privacy-centric company Zero-Knowledge Systems, said this a while ago: "A lot of people think privacy is like the weather. Everybody talks about it, but there's not a lot you can do about it. So the best you can do is build a niche market selling umbrellas."
Well, the point here is that those people are wrong. We can move away from personal PETs, which amounts to collecting umbrellas in a monsoon, and towards better solutions that leave us with better weather.
To make progress, we need strategic tools that work. So our focus should be on being resourceful with existing tools that are mature, or that only require some moderate hacking to get working; I've tried to point out a few examples of that.

So, to end where we began: broadly, privacy is one of the highest ideals of the internet, and it makes possible a lot of what the early pioneers in the area saw as its true potential. Your behavior on the internet is not required by law to faithfully predict your future purchasing behavior or your political beliefs or anything else. You're not some zoo animal that has to act naturally in its confines. What you do on the internet, who your bots monitor, how you represent the local part of your email address, and so on, is a part of expression, and that can be harnessed strategically, like all expression. In the words of our esteemed colleague Emmanuel Goldstein, most of the power remains in our hands and in our minds. I think hackers should use it.
Yeah, happy to take questions now, if there are any. Thank you very much.
All right. Thank you very much, David. That was an excellent presentation.
I really enjoyed a lot of it. We do have some questions from the audience, so we're gonna go right into them. The first question I have for you is: are there considerations for SPETs by jurisdiction? Or should we be thinking about their application in a more broad context?
Yeah, that's a good question. I mean, I think it's an advantage for strategic PETs that distributed approaches are kind of naturally paired with them, and distributed approaches can be run across legal jurisdictions. So you can build into the design of these systems a way to escape the grip of a particular legal regime. So I guess the answer is that we should take a more global approach in the design of SPETs, but the threat modeling should be informed by what kinds of things local jurisdictions are trying to do, because there's a lot of variation. And it doesn't have to be international: there definitely is a lot of variation among the states in the United States that can make a difference, for example for transparency under the CCPA in California. So it's a good question. I think the answer is sort of: think locally, act globally, something like that. Look for the local threats, and then come up with an approach that circumvents them.
That's right. They say all things start local. Sure.
All right, I have another question for you, and we've got about nine minutes. The next question is: you also mentioned Nissenbaum. How does she feel about the issues of SPETs in a contextual integrity framework? You mentioned that in your talk.
Yeah. So, I'm not relying on any particular model of privacy. This question is referring to a kind of philosophical framework for understanding privacy. I think that's valuable in the sense that it forces us to discuss privacy in an expansive way. It's a little counterintuitive, but I think sometimes when you try to define things, you end up with a feeling for their diversity that's lost otherwise, if you're just very focused on the practical.

But I would also say that this talk is really focused on rubber-hits-the-road technology. The idea is that we should be aiming for somewhat lower-hanging fruit that can be deployed and used by at least some decent number of members of a community. Very theoretical work can sometimes distract from that in some sense. So I tried to not put too fine a point on it; I don't think anything should turn on how you understand privacy from the perspective of Nissenbaum's framework.
Got it. All right, then our next question: should people run their own mail server to protect their privacy? It's very hard to compete with free mail, but you give up your privacy.
That's interesting. So, one place where this came up in the presentation is in creating a forwarding service that allows for fingerprinting in one form or another. I mentioned you could run your own mail server that does this, and do something like domain churn with freely available domains, automating the process. So that's a benefit to running your own mail server, a kind of strategic reason to do it.
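The fingerprinting-by-forwarding idea can be sketched simply: hand every service its own forwarding address on a server you control, so a leaked or sold address identifies who leaked it. The secret key, domain, and function names here are assumptions for illustration:

```python
import hashlib
import hmac

# Illustrative sketch: derive a unique, unguessable forwarding address
# per service on your own mail server. If that address later shows up
# in spam, you know exactly which service leaked or sold it.

SECRET = b"replace-with-a-real-secret-key"
DOMAIN = "example.org"  # a domain you control, ideally churned periodically

def address_for(service_name):
    """Derive a stable local part for a given service via HMAC-SHA256."""
    tag = hmac.new(SECRET, service_name.encode(), hashlib.sha256).hexdigest()[:12]
    return f"{service_name}.{tag}@{DOMAIN}"

def identify_leaker(leaked_address, known_services):
    """Given an address found in spam, name the responsible service."""
    for service in known_services:
        if address_for(service) == leaked_address:
            return service
    return None
```

The HMAC tag keeps third parties from forging plausible addresses for services you never signed up with, while you can always re-derive the mapping from the secret.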
But, you know, I guess to the extent that you're forcing actors in this vast interoperable system to take seriously that there may be individuals running their own mail servers, that's also a benefit, right? You can't just blacklist email domains that aren't familiar enough to you, that aren't big enough players, which is a problem that currently haunts this question.

So I guess I would say it's a good idea, but it is a maintenance task. There are a lot of things to consider with respect to security, attacks on your system, and maintenance of your system. So it depends on how much you're interested in the process. But from a strategic perspective, I think there's a reason to do it. Yeah, I do think there's a reason.

Thank you. That's an excellent insight; I definitely agree with you. One of the things that you mentioned for your privacy that I think is very interesting is using browsers in a VM sandbox, so that they're all self-contained. It's like your own personal SPET.
Yeah, well, that kind of approach, the personal approach to privacy that involves things like the isolation you get from a VM, to varying strengths depending on how you run things, that's really a matter of personal PETs. So that should be distinguished, I think, from this kind of network that is designed to root out and counter privacy attacks.

So, I guess, to the extent that you find evidence of exploits in a browser, let's say an extension that is hoovering up lots of your behavior on the web and sending it off to some third party, to the extent that you detect that with a transparency tool, that's then reason to target a screw-you defense at the company: to think of ways to visit their site and ruin their ad revenue, to sign up Sybils to their listservs, all these kinds of approaches you can imagine that impose a cost for this attempt to violate your privacy.
Exactly. All right, we're running out of time, so I have one more question for you. The question was about back in the day, when privacy first started getting some traction: people were seeding their emails with PRISM trigger words to foil the bulk data collection under PRISM. What's your opinion on how that whole thing went?

Yeah, that's a really good question.
I'm glad that came up, too. There was an early project doing that called Flagger, which is now sort of hard to find; I think there's a repository for it somewhere you could dig up. And it was subsequently taken up more prominently by Ben Grosser, who is one of those artists I mentioned before. I think it's a pretty clear example of a kind of proto-SPET. In a way, it has both dimensions, because the transparency side came out as a result of the Snowden revelations: there was an attempt to say, okay, we can take this list of watchwords that was released by the DHS and append some long string of them to the content of your email. In the case of Ben Grosser, it was not just appending the words; it was a generated string of grammatical but nonsensical language. So you'd get these emails, you know, "Grandma's coming to tea later," and then at the bottom it would say, you know, "bring the anthrax to the drop point," and lots and lots of sinister-sounding stuff.

What I like about that case is that it dramatizes the problem with running this. As an artistic project, it's amazing, right? But everyone recoils at the very idea of running such a thing: you put a big blinking light on your head. It's a kind of privacy protest, and it's a clear message, but it's a high cost, and it's not clear how effective it is, other than the indirect means of affecting maybe policy and, to some extent, public opinion. But if you want to have a direct effect, if you want to design systems that use that expression more directly, you need to have those same approaches leveraged in a way that hides who's responsible, that protects them from retribution, and that is systematic in how its effect is achieved.
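The watchword-appending idea behind those projects can be reduced to a toy sketch: take words from a public watchlist and append them to an outgoing message to degrade keyword-based bulk collection. The word list below is a stand-in, not the actual released DHS list:

```python
import random

# Toy version of the Flagger-style idea: append watchlist words to a
# message as a signature block. WATCHWORDS is an illustrative stand-in.

WATCHWORDS = ["exercise", "drill", "agent", "recovery", "cloud", "plume"]

def flag_message(body, n_words=4, rng=None):
    """Return `body` with a block of randomly chosen watchlist words appended.

    `rng` accepts a seeded random.Random for reproducible output.
    """
    rng = rng or random.Random()
    chosen = [rng.choice(WATCHWORDS) for _ in range(n_words)]
    return body + "\n--\n" + " ".join(chosen)
```

Grosser's version generated grammatical narrative text rather than bare word lists, which is much harder for a filter to strip out; this sketch only shows the basic appending mechanic.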
All right, well, we're out of time now. Thank you very much, David Sidi, PhD student at the University of Arizona, for your presentation, "Hackers and the Arms Race for Privacy." I really appreciate you coming here, on behalf of all the attendees and all the volunteers and everybody here at HOPE 2020. Thank you very much.

Great, thanks so much.