Data Driven Society

(soft upbeat music) – We’re at a crucial
moment when we can actually decide what to do with AI
in a deep meaningful way. (soft upbeat music) – The achievements that we make with AI are very often not so
much about the technology but about what the people do with it. And I think that probably speaks a lot to where this industry is at, that we probably have access to the technology that we need, but it’s the people that we
need to bring on our journey. (soft upbeat music) – I think this community has the possibility to process that in a positive way. If you manage to do so,
by the foundation now, it’s going to make a lot of
difference for the world. (soft upbeat music) – And it’s not really
about the technology, the technology uncovers
the way that humans think. It amplifies what we
do for good or for ill, and it makes us confront
over and over again the imperfections in our own culture and our own behavior
because the AI can make that more powerful for good, in connecting us. Or for bad, in terms of dividing us. I would like to introduce
our second keynote speaker, Surya Mattu, who is a Brooklyn-based artist, engineer, and investigative journalist. This is perfect timing in
the arc of our discussions, because what Surya does is focus on ways in which algorithmic systems perpetuate systemic biases and inequalities in society. Alright?
So, here’s a new technology. How does it make us reflect even harder about our power to do good or ill, so that we thoughtfully apply it? Surya is a research scientist at The Center for Civic Media,
at the MIT Media Lab, and a resident at Eyebeam. When he was a data reporter at Gizmodo’s special projects desk, and a contributing researcher at ProPublica, he worked on a project called Machine Bias, a series that aims to highlight how algorithmic systems can be biased and discriminate against people. That was a Pulitzer Prize finalist for explanatory journalism. So, please join me in welcoming Surya, as he helps us dive into the implications of the power of this technology. (group applauding) – So, hi! (laughs) I’m Surya. Thank you very much for
coming and listening to me. I’m feeling quite kind
of embarrassed right now, because you pretty much covered all the stuff I’m gonna say. (laughter) So, you could just leave
and kind of go check out the museum, and you’d have got pretty much the same amount of information. But, since you’re here,
I might as well like try to make this at least entertaining. So, I’ll just start with like a background of who I am and what I do. So, yes, artist, engineer, and at least an accidental journalist, because I never really
meant to be a journalist. I kinda just fell into it. And the way I fell into
it was I was always interested in understanding how algorithmic systems affect society, and the way my research in that started actually came from
being a systems engineer working for a microchip manufacturer. I used to write software, and I worked on healthcare services, so I was very interested in how the protocols we write at the chip level can actually influence how hospitals are using technology. And that’s kind of where a
lot of my work started from. But, since then I’ve kind of moved forward and been thinking about it
in a kind of broader context. And, this is actually what I think of as the framing of the research I do. And I call it Adversarial Research, because the way I like to think about it is that the goal of the research I’m interested in carrying out isn’t to understand how technology works, but to specifically
focus on who it harms. Because I think one of the challenges when you’re thinking about these systems is we think about it from the
perspective of the narrative, of the person who’s made it. And, of course, they can’t
think of all the consequences of their system, right? So there’s always an apologetic tone. They’re like, oh well, it’s kind of racist, but they didn’t mean to make it like that. It just happened. And you know, it’s so complicated and hard that it’s not their fault. Which is fair and honest, but the problem is it kind of demeans and reduces the harm that’s actually being caused. So, separating those out is something that I think is helpful in your mental model, when you’re trying to approach a system. As I come at this, I’m like, what are all the ways in which it’s bad? And it’s not like a
reflection on the person, it’s not saying that person is bad, but, it’s like the system is causing these kinds of problems. And I think that separation
is what we kind of lose in a lot of the conversations
around this stuff because you’re focused so
much on its complexity. But, do you know what? Bridges are complex. Planes are complex. We go on planes every day
and we don’t have the same kind of fear that we do when
we’re using a computer, right? And you’re literally in a
can of steel in the air, like (laughter) and that’s less terrifying to us (laughs) than using a computer. So, an example of that
as was already mentioned was the story I worked on with Julia Angwin and Jeff Larson a couple of years ago; it’s called Machine Bias. And it basically talks about how risk assessment tools are being used in the criminal justice system. We actually went to Broward County in Florida (thank you, Sunshine Laws, for making that data accessible) to do the study, and we basically found that these risk assessment tools that were being used are racist. And the way in which they’re racist is that they’re twice as likely to label an African American defendant as high risk of committing a crime again versus a Caucasian defendant. And back to like the– Oh, I guess I’ll just get into it in a second. So, before I can dive in more, I wanna talk– and we kind of already discussed this, but it’s always like, I
like to start like this when talking about AI. Which is, what are we
not talking about, right? So, we’re not talking
about sentience, right. We’re not talking about this. We’re not talking about the systems that can think for themselves and have like feelings and
emotions, because they don’t. They’re spreadsheets; they’re oppressive spreadsheets, but they are spreadsheets, right? (laughs) We’re not talking about this, right. We’re not talking, do people know what I mean by this? This is Minority Report; this is like pre-crime, right? Pre-crime doesn’t look like that, it looks like this, right? And that’s the thing that we
lose in these conversations. It’s not predicting that
kind of future, right. It’s not the inevitability
that’s put on it. That’s the narrative spin. It’s not actually that; it’s always probabilistic, and it’s always coming from a place of human assumption and bias. So, even when you talk about pre-crime, the question to ask there for me always is, what do you define as crime, right? Like, is white collar crime, crime? Well, if it is then we can really make– and someone’s built this, right? Someone’s built a white collar crime prediction app, where you can walk around a city, a place like New York City, and they built a training data set and a model that tells you, if you’re in a neighborhood, how likely it is that white collar crime has occurred there. So, you can like give yourself– and it just kind of gets to the assumption of what we mean by crime, when we talk about crime. Right, so, when talking
about new intelligences what we’re really talking
about is predictive and prescriptive models
as we learned yesterday. The ability to make an informed guess about what might happen in the future based on what happened in the past. The ability to predict something from data is not new, what is new is the scale at which we can make these predictions which has been made possible by the ever increasing power of computation and the volume of data gathered. So, that brings me to the next question. Which is, when we are talking about AI what are we talking about, right? We focus a lot on the algorithms
that kind of govern this but, the actual ecosystem of AI is much bigger than that, and when you start thinking of AI not as just the algorithmic
systems that run them, but the entire thing, you kind of see where a lot of the biases lie and where a lot of the intelligence is. And a lot of the intelligence actually isn’t intelligent, it’s just enabled by money and power, right. Which are very old and un-innovative ways to think about things. So, I would argue that AI is made up of a vast, power-hungry, interconnected network of computers and humans. And I think that people
who really do a good job of describing this in detail are Kate Crawford and Vladan Joler, in this really in-depth investigation they did into the Amazon Echo. The URL is– AI Now is a group that hasn’t been mentioned yet; I really recommend them and the work they’re doing. They’re really thinking very meaningfully and deeply about the social implications of AI. So, I just wanted to give them a plug. So, yeah, Kate and Vladan basically went on a deep dive of
understanding the Amazon Echo, and basically, the entire pipeline that goes into, hey Alexa, play music. Like what are all the pieces that move when that’s happening? So, what have I got here?
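To make the "what are all the pieces that move" question concrete, here is a rough sketch of that request pipeline in code. The stage names are my own simplification, not Crawford and Joler's actual map, which also traces the supply chains, labor, and energy behind each stage:

```python
# A rough sketch of the pipeline behind "hey Alexa, play music".
# Stage names are illustrative simplifications, not Crawford and
# Joler's diagram, which also maps raw materials, labor, and energy.
PIPELINE = [
    ("wake word", "the device locally listens for 'Alexa'"),
    ("capture", "audio is recorded and streamed to Amazon's servers"),
    ("speech-to-text", "cloud models transcribe the audio"),
    ("intent parsing", "the transcript is mapped to a command"),
    ("fulfillment", "a music service is queried for a stream"),
    ("playback", "audio is sent back to the speaker"),
]

def trace(command):
    """Print each stage a single utterance passes through."""
    print(f"command: {command!r}")
    for i, (stage, what) in enumerate(PIPELINE, 1):
        print(f"  {i}. {stage}: {what}")

trace("hey Alexa, play music")
```

Every one of those stages runs on physical machines somewhere, which is exactly where the talk goes next: the raw materials and infrastructure underneath them.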
Yeah. So, when they describe the
ecosystem of the Amazon Echo they tried to understand like, what all the components
are and how they move. So, the place you really have to start is the manufacturing, right? It’s the raw materials. So, as much as we like to say that AI at scale is made
possible by cloud computing, the reality is that The Cloud is actually a bunch of computers humming in a data center. It’s easy to forget
that The Cloud is built on materials that are
extracted from the Earth. The rarer among them coming
from some of the most conflict prone mining areas. I’m not going to dive into
the details of all of that, but it is always worth remembering that when you train models and run AI, there is a severe environmental impact to that stuff, right? Like, these things don’t just happen abstractly in the distance; energy is being
used for these systems. Don’t get me started on blockchain. (laughter) That’s like a whole other conversation. So yeah, so we start
with the raw materials, and the elements, and the extractions, that kind of go through from them, and then we get into the
physical infrastructure, right? So like, these are the data centers, the cables, the communication equipment, the computers, that
actually run the internet. And again, like many other things, The Cloud works on technology that has its origins in
military applications. Things such as flight pattern detection, weather prediction, and
early warning systems for incoming missiles. These are the foundations of
all the technology we use. And it’s just worth remembering that when we are using these systems because some of these assumptions are so deeply embedded in these systems. Like, we confuse them and
conflate them with reality. And if you can actually think of, and I’ll get into this
a little more detailed, but, if you can think beyond
just what already exists or what we want to exist, we can start seeing where intentions and edges lie. And the next thing, which I
think gets often neglected in the conversations around AI is the actual labor that
goes into that, right? These systems are not that smart. Humans have gone and labeled data. There are people in the third world who have to sit and like
look at dick pics every day and decide whether something is a dick pic or not, because computers aren’t good at figuring that out yet, right? So we think about AI, when AI is actually just a bunch of human labor that has been extracted and compartmentalized in a way that a machine can replicate, right? So when we say we have to be worried about AI, it’s not AI, it’s the people who are trying to take your job away. Those are different things. AI can actually help
you do your job better. There’s nothing in it that prevents you from doing that, it’s just like, who is the system actually working for? That for me is the key question. Yeah so things like content moderation and like even like image classification, these are all things that are possible now because this work has been
happening for a long time, and I actually think this work from Andrew Norman Wilson
highlights this really well. I’m just going to play
two minutes of this video just to give you a sense
of what he was doing. – [Narrator] In September 2007, I was hired jointly by
Transvideo Studios and Google. Both headquartered in
Mountain View, California. Transvideo had a contract with Google and took care of 100% of
their video production in Mountain View, and sometimes elsewhere. My labor was sold to Google in the form of a nine to five job. I had access to a personally
unprecedented amount of privileges, but was not
entitled to the ski trips, Disneyland adventures, stock options, and holiday cash bonuses, from their team of
temporary Santa Clauses. Thousands of people with red badges, such as, me, my team, and
most other contractors, worked amongst thousands of people with white badges as full-time Googlers. Interns are given green badges, however, a fourth class exists at Google that involves strictly data entry labor. Or, more appropriately,
the labor of digitizing. These workers are identifiable
by their yellow badges and they go by the team name, ScanOps. They scan books, page by
page, for Google Book Search. The workers wearing yellow
badges are not allowed any of the privileges that I was allowed. Ride the Google bikes, take the Google luxury limo shuttles home, eat free gourmet Google meals, attend authors at Google talks, and receive free signed
copies of the authors books, or set foot anywhere else on campus except for the building they work in. They also are not given
backpacks, mobile devices, thumb drives, or any chance
for social interaction with any other Google employees. Most Google employees don’t know about the yellow badge class. Their building, 3.1459, was next to mine, and I used to see them leave every day at precisely 2:15 p.m.,
like a bell just rang. – So, you get the idea, right? So, this is in 2007. Which is when Google still said that they, what is it? Be no evil, don’t do evil? They still said that. And this was happening at the time. So, the assumption that
these things are happening because of a bunch of clever white dudes is not actually true, there’s like a hidden layer of like labor that goes
into enabling these ideas, and we often don’t get
to hear about that labor because it’s just not that interesting. It doesn’t make as
compelling of an argument about why things are happening. And not to take away from the people who are doing the really
interesting, clever work, I think that’s important, too. But, it’s important to remember that everything goes together, and it’s not like one of these
things is more valuable than another; it’s what we put value into, and what we value as society. So, anyways, moving on from labor, going back to this. Does anyone here know what this is? The ENIAC, it’s one of the first digital computers that were ever made, and it was used in the early ’40s for a variety of things. Basically, all around prediction, and the main applications were thermonuclear weapon simulations, the Manhattan Project, generating
artillery firing tables. Which is kind of where the roots of cybernetics come from. Basically, the idea being
that if you can build a simulation of how– if a plane’s going in the sky like this, and you want to shoot something to hit it, you need to know how to aim your gun, so that when it gets here, you’ll be able to get there. And that’s all stuff you can do mathematically, and they built these tables of how you’re supposed to position the angles of your gun, so that when you see the velocity, you can measure it; it’s too hard to calculate on the spot, so they make these tables to do that. So, that was one of the
original applications for computing, and the third
was weather prediction. So, these are like the original
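The firing-table idea he sketches reduces to an intercept calculation: given the plane's position and velocity and your shell's speed, solve for the time at which shell and target can meet, then aim at that future point. A minimal sketch of that math (my own illustration with made-up numbers, assuming constant velocity and ignoring gravity and drag, which the real tables did account for):

```python
import math

def intercept_point(target_pos, target_vel, shell_speed):
    """Return the point to aim at so a shell fired now at shell_speed
    meets a target moving at constant velocity, or None if it can't.
    Solves |p + v*t| = s*t for the flight time t."""
    px, py = target_pos
    vx, vy = target_vel
    # Quadratic in t: (v.v - s^2) t^2 + 2 (p.v) t + p.p = 0
    a = vx * vx + vy * vy - shell_speed ** 2
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if a == 0:  # shell exactly as fast as the target
        if b >= 0:
            return None
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None  # target too fast to ever intercept
        t = (-b - math.sqrt(disc)) / (2 * a)
        if t < 0:
            t = (-b + math.sqrt(disc)) / (2 * a)
        if t < 0:
            return None
    return (px + vx * t, py + vy * t)

# A tiny "firing table": aim points for a target crossing at 100 m/s,
# at several ranges, with a 300 m/s shell.
for rng in (1000, 2000, 3000):
    print(rng, intercept_point((rng, 0), (0, 100), 300))
```

Tabulating results like these for many ranges, speeds, and gun angles is what the printed firing tables were, and computing those entries by hand was the tedious work ENIAC automated.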
uses of big data, right? And if you think about these uses, what’s kind of really nice about them is that they don’t involve any humans. They’re natural systems that kind of work of their own accord, and they’re fairly easy to predict with a consistent error rate, because there’s no human
interaction in this ecosystem. So yeah, the ability to
harness data for prediction has touched many aspects
of human life now. However, in order to use
data to make a prediction, it is necessary to
reduce our complex world into a simpler model. Something a computer can understand. While this framework makes a lot of sense when you’re trying to predict
the trajectory of a plane, how much uranium you need
to make a nuclear bomb, or whether you need an umbrella when you leave the house or not, there’s a radically different influence when using it in a more social context that does include humans. So, for example, let’s compare
the predictive analysis in the weather to something
like healthcare, right? In 2015, John Hancock, an insurance company that also offered corporate wellness programs, came out saying that they wanted to disrupt the industry by providing discounts on insurance premiums and corporate benefits to people who were willing to wear a fitness tracker. So, the logic was really simple: share your data, and get insurance discounts. In response to this, my collaborator Tega Brain and I started
thinking about the logic and the assumptions
embedded within this model. So, we started thinking about
who these systems left out. Because if you really think about it, a Fitbit is a simple heart-rate
sensor and an accelerometer. An accelerometer is a device that is good at measuring acceleration. That’s all it can do, right? So, it can’t measure this
but, it can measure this. And the reason it measures
that, instead of this, is because that is a
better proxy for motion. So, when we say you’re
living an active lifestyle, it’s because your fitness tracker, which measures acceleration, can map that. So, we started thinking about this, and we made this art project on Fitbits, and I’ll just play the video,
it really explains it well. – [Narrator] Qualifying
for insurance discounts? Are you unable to afford
active adventure holidays? Or, maybe you just want to keep your personal data private
without having to pay higher premiums for the privilege. Our team of researchers have devised a simple range of
techniques that will allow you to free your fitness
data from your self. (cheerful music) (power drill revving) (clicking) Free your fitness. Free yourself. Free your fitness from yourself. – Right, so this is back in 2015 and we thought we were pretty funny, and you could argue, as some
actually did on the internet, that our subversive art
project was dishonest, and that, what did it say? And that lying to your
fitness tracker is deceptive, and should be considered illegal, right? There are all these like end of potential ramifications around this. But, when you consider the fact that most studies about
these devices say at best, that they don’t significantly
improve our lives, and say at worst, that
they are wildly inaccurate. It becomes clearer who these systems are actually working for. Flash forward to 2018, and
John Hancock has doubled down and said that they are
moving towards exclusively selling life insurance that is based on tracking you. What this does so elegantly is conflate what an
accelerometer can measure with a person’s health. And the people that this will
penalize the most are those who aren’t able to keep the sensor happy. Think of the taxi driver
who works long shifts and who will be penalized
for not going to the gym, or the person without legs, or just an elderly person who has been advised by their doctors to not strain themselves. All of the sudden, they will
all be penalized by default for not being able to
conform to this system. Reducing the complexity of
health and wellness to something that can be tracked by a sensor clearly has more benefit to
those who can use that data than those who generate it. And like with most forms
of technical innovation, the people this will hurt the most are the most marginalized among us. So, the insurance made available through fitness tracking surveillance is most commonly used by the people who can’t afford that. And that does this thing of perpetuating a form of mass surveillance on the poorest among us. Which has always been happening, right? So, AI is not doing something that new. It’s kind of doing the same stuff in just a new kind of flavor, unless we really think about it. So these systems are
difficult to hold to account because we just have no access, right? The opaqueness of the algorithms, and of the companies that run them, makes it hard to ask meaningful questions about how they work from outside these companies. And when issues are made public, the most common ways these
things are dealt with are one of three, and I wanna
go through them right now. The first being the
easiest, self regulation. I like this example from Amazon that came out earlier this
year, like a month ago, where they basically said that since 2014, they’ve been developing an algorithm that was going to help
them with recruiting. They get so many CVs and resumes every day, and they wanted a system that
basically could tell them, hey, who are the best people in this pool, who should we hire? But the way it turned out, they said, was that the tool turned out to be racist. Oh no, racist, sorry, sexist. The tool disadvantaged candidates who went to certain women’s colleges, presumably not attended by existing male engineers at Amazon. It similarly downgraded
resumes that included the word “women’s,” as in “women’s rugby team,” and it privileged resumes with the kind of verbs that men tend to use, like “executed” and “captured.” So now this kind of gets to
that point in the conversation around like was the algorithm racist, or is the algorithm sexist or is Amazon inherently a place where men
are more comfortable working? Right, and this is for me one of the most dangerous parts of this conversation, and the nuance is, we conflate what’s actually a value that’s embedded within our system with something that the algorithm is just doing: it’s not our fault, we don’t think like that. So anyway, so self-regulation
is the first one, the first of the techniques. The second is a classic denial, right? Of course we don’t do anything like that, of course there’s no way in which we’re causing these kinds of harms. And then that always inevitably leads to the third kind. Which is an apology, (laughing) right? So a common thing you hear from people who work in the technology industry is that the algorithms, rather than themselves, are not racist. They’re merely learning the prejudices already present in society. It is true that the harms often come as unintended consequences. Which is reasonable given the
sheer scale of these systems and how unprecedented they are. The challenge then is to determine who’s accountable for them, and never before has it been so easy to separate intention from accountability. That’s kind of really one of
the cruxes of the challenges we have right now. The reality is that unless we state explicit values in our technology, it will always imbue the values and the ethics of the culture that it is created in. Like, we can’t code our way out of racism, and we certainly can’t
make the situation better if we’re not listening
and talking to the people most affected by it. In order to ensure our
technology doesn’t disenfranchise marginalized communities, we
need to listen to the people from these communities and
build from their experience. Technology is never neutral, it just reflects the
views of those in power, even the ones that
aren’t explicitly stated. And just again to bring this point home, I wanna walk you through an example of a project I did when I was at Gizmodo with my colleague Kashmir Hill. So we were super interested in the People You May Know
algorithm that Facebook has. Does everyone know what the people you may know algorithm is, like the way they suggest friends to you? So at Gizmodo, Kash and
I were really interested in understanding how this algorithm works. Kash had already done previous
reporting that showed how psychologists were being
suggested their patients and we wanted to investigate that further. Since Facebook doesn’t have an API or a way for you to access this data, I built us a tool that
allowed us to collect all the people who were
being suggested over time. There was no other way to
actually get access to this data. And we didn’t really
find it that interesting. There was only one person
really from Kash’s history who was interesting, and it was this woman named Rebecca Porter. I always have to read this because it’s so wild that Facebook figured this out. Rebecca Porter is Kashmir’s great aunt by marriage. She’s married to Kash’s biological grandfather’s brother. Her biological grandfather is a man Kashmir has never met, with the last name Porter, who abandoned her father when he was a baby. Her father was adopted by a man whose last name is Hill, which is Kashmir’s last name, and he didn’t know about
his biological father until adulthood. Facebook suggested this person
to Kash, right? (laughs) I don’t even know, like,
it still blows my mind that like Facebook had
figured that out, right? And yeah Facebook was able to figure out Kashmir’s family tree
better than she could. And we wrote the story and we
talked about this experience and we then after editing the story, got all these tips and
emails from our readers talking about the different ways in which they’re harmed by this algorithm. And the people it turns out, the communities that are
most harmed by this algorithm are sex workers and the LGBTQ community. So we heard stories from sex workers who went to great lengths to not let their clients know about their personal identity, but somehow they were still showing up in their Facebook friend suggestions. Right, so without even
having, they didn’t know what was happening. They tried using different phones, they tried a bunch of stuff, but there was no way for
them to opt out of this. So when we were writing the story, we asked Facebook, hey can
you, can you like tell us, for someone who says they’re having this problem, what can they do about it? And Facebook said “Oh yeah
you just go to your settings, “you go to this dropdown
menu and you just say, “show my thing is available
by no-one, blah blah blah “and you should be totally fine” So we wrote that in the story and almost immediately we got responses from our readers saying “Hey
I don’t have that option” And it turns out that the
reality is there is no way to turn that off. It turns out the opt-out is only available for public figures. So if you have Facebook,
if you use Facebook, there is no way for you to
completely block other users from sending you requests. We wrote this story in like January. When did this story come out? (audience mumbles) Yes so November last
year, almost a year ago, and this is still the case, right? We told Facebook about this, we’ve told them about the harms. I even read FTC complaints
from people saying that this is happening and they still don’t have
the option of no one. So anyway we got to
have this great headline for the next story we wrote, where Facebook basically
said “Oh yeah turns out “we were wrong, we
don’t have that setting, “we used to have that setting.(laughing) “It was like an AB
testing thing, you know, “it’s kinda complicated, you don’t know” And like this really
reflects another big problem with this industry, which is the separation between PR and engineering. And for me the most terrifying thing is that they’re gaslighting themselves before they gaslight
us, right? (laughing) Like that’s, that’s kind of the problem. It’s like we don’t know
what the problems are, because they don’t think there are problems, and we don’t have access, we can’t ask meaningful questions, and that’s kind of how
this whole thing starts. So we started like
really digging into this and we heard about this crazy
concept of shadow profiles. Does anyone here know
what a shadow profile is? So for those who don’t know,
what a shadow profile is. Let’s say there’s person A, person B, and person C. Person A and person C are on Facebook. Person A and person C have person B’s phone number in their contacts lists. Facebook now knows about person B, because person B exists in both their contact lists. Person B might not want to be on Facebook, might not want (audience
member mumbles question) Not sure yet, because we can’t prove it. No it’s also just Facebook messenger, so when you make a Facebook account, they say “Hey connect all your friends” (audience member mumbles) But if you share your contacts
with Facebook even once they have it. – They have it? – Yep. So that’s all they need, right. And that’s how they do it,
so they build shadow profiles of people in your contact list, even if they’re not on Facebook, right? So the issue of consent is
not even considered here. So Facebook knows, they
say they have a graph of 2 billion people but that’s the
one that they claim to have. We don’t even know who they don’t have. Who they have who’s not on Facebook, but who doesn’t want to be on Facebook. And when you start reading
through their patents, which as an obsessive person I have, you start understanding the
mechanics behind this right? It’s not that, it’s again it’s
not that they’re super evil they just wanna connect
you to everyone, you know, it’s like, why what could
go wrong right? (laughing) People like people we should all hang out, it’s totally great. So anyhow, I ended up, what we
ended up doing in this story was we built a tool, and again this is like a concrete example for me of how you can investigate an algorithmic system without caring how it works, right? So we don’t care how it works, we really just care who it harms. I still don’t know how exactly
Facebook suggests friends to people but we do know it harms members from the sex worker communities
and the LGBTQ people. And they still haven’t changed that. So we built this tool that
basically allowed people to collect their own data. Because there was no easy
way for them to access it. This is not even considered,
even in like GDPR, if you went through
that whole rabbit hole, this wouldn’t come up. This is not in Facebook’s mind your data. So I built a tool that let us do that and obviously they were
not happy about that. And they said it violated
their terms of service and please stop and this
week we released this tool right before the Cambridge
Analytica stuff broke out. So I think they were then
distracted (laughing) and they like let it go
and didn’t send us the cease and desist. But it does highlight
the fact that Facebook doesn’t want you to collect
your own data, right? So they say that this is all for us but if you actually start
performatively trying to violate their terms of service, as
some of us do ’cause it’s fun. You start understanding
what the boundaries are of what, how these companies work, right? And I just wanna make this point again because I think it’s so crucial. Facebook’s focus was on
building the social graph. The one thing they've always proudly proclaimed is that they are in the business of connecting people to each other. It would be naive to believe that no one there considered that not all people necessarily want to be connected to each other, right? Victims of sexual harassment don't want to be connected to their harassers, and they want an explicit way to prevent that. Not having an option to say "no one" betrays, for me, the intention, because the social graph is the bread and butter of their business model; it's the one asset they actually have. By not adding the "no one" option, they're telling us: it's more important for us to keep connecting people together, and not give them the option to not connect, than to care for these communities. That's what I take away from it. Of course we can get into a whole conversation about intention, that they don't really mean it, it's a really complicated thing. But that's what I see every time I see that drop-down menu, right? Every time I see it on my Facebook page I'm like, you know this now, we've told you, there's a bunch of writing around this, and you still don't care. And that's where the issue of agency comes up, and the lack of agency we have as people. So just to drive this point home, and we made it a lot yesterday, but I wanna make it with
the best possible gif. Humans drive technology. It takes a second to load, it takes a second, you know, a lot going on (laughing) But really, humans do drive technology, it's not something that
happens in a vacuum, right? And if you're not explicit about the values we're putting into our technology, it is just going to imbue the values of the culture it's in. It's not that people at Facebook don't care about sex workers, it's that they don't have any around them. So no one's telling them, "hey, this harms me in this way," and when we do tell them, they take on a defensive role rather than saying, "yep, we have totally messed up and don't know how." And that's the gaslighting that I'm personally really concerned about. Alright, last point and then I'll stop, sorry, I've been very ranty. The final point I wanna
make is about value. Since the invention of the smartphone, we seem to be working with the assumption that everything will be better if it's smart. We now have an entire industry dedicated to making this a reality. I'm talking, of course, about the internet of things. Earlier this year my colleague Kash and I were interested in learning what the ambient emissions of a digital smart home would reveal about us. So what we ended up doing was: Kash bought like 18 different smart devices for her home, and I built her a special router that basically just monitored all the network activity as it went through. So the question we were
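A router in that position sees every connection every device makes. Here is a minimal sketch of the kind of log such a router-side monitor might produce, and the first question you can ask of it; the log format, IP addresses, and hostnames are all invented for illustration, not taken from the actual experiment:

```python
from collections import Counter

# Hypothetical router log lines: "timestamp,device_ip,destination_host"
# (format and values invented for illustration)
LOG = """\
2018-01-04T07:02:11,192.168.1.12,coffee.example.com
2018-01-04T07:02:14,192.168.1.30,device-metrics.example.com
2018-01-04T07:05:14,192.168.1.30,device-metrics.example.com
2018-01-04T22:41:09,192.168.1.12,coffee.example.com
"""

def pings_per_device(log_text):
    """Count how often each device phoned home, using metadata alone --
    no packet contents needed."""
    counts = Counter()
    for line in log_text.strip().splitlines():
        timestamp, device, host = line.split(",")
        counts[device] += 1
    return dict(counts)

print(pings_per_device(LOG))
```

Even this crude count is enough to show that a "quiet" home is never actually quiet on the network.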
really asking ourselves when we started was, let me step back: the FCC had recently said that ISPs, internet service providers, can sell all the data that's coming through their networks. We were interested in seeing what the ISP could see from her smart home and, more importantly, what they could now sell. So we basically built her a special router, which was the equivalent of seeing what her smart devices were revealing about her. We were one step before the ISP, but it's kind of the same thing. We ran this experiment for two months and found that there wasn't a single minute of silence. The Amazon Echo spoke to its servers every three minutes, and that was just baseline stuff, this is kind of the graph data. But the things I could
infer from just looking at the data, sitting in New York while she's in San Francisco, were things like: I knew when they woke up and when they went to bed, because they use a smart coffee machine and it pings every time they use it. I knew when Kash brushed her teeth, because she used a smart toothbrush that pinged every time she was brushing. There was actually this amazing moment: we were on Slack, in a meeting, and I was like, "I know you haven't brushed your teeth yet" (laughing) Sent her a DM. And she was like, "yeah." She's a good one. (laughing) I knew when they were putting their baby to bed, because they played a lullaby on their Echo, and the lullaby went to a particular domain, it was like something or other. I also knew what they
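All of those inferences come from nothing but timestamps. A sketch of the idea, with invented ping times standing in for one device's real traffic: the first and last ping of each day approximate when someone wakes up and goes to bed.

```python
from collections import defaultdict
from datetime import datetime

# Invented ping timestamps from a single smart device (ISO format)
PINGS = [
    "2018-01-04T06:58:02",
    "2018-01-04T12:15:40",
    "2018-01-04T22:12:19",
    "2018-01-05T07:03:51",
    "2018-01-05T21:47:05",
]

def daily_rhythm(pings):
    """Map each day to its first and last observed activity --
    a rough proxy for wake-up time and bedtime."""
    by_day = defaultdict(list)
    for stamp in pings:
        moment = datetime.fromisoformat(stamp)
        by_day[moment.date()].append(moment.time())
    return {day: (min(times), max(times)) for day, times in by_day.items()}

for day, (first, last) in sorted(daily_rhythm(PINGS).items()):
    print(day, "first ping:", first, "last ping:", last)
```

Nothing here needed decrypting; the schedule falls out of the metadata alone.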
watch on their smart TV, because again, the internet of things realm right now is like the insecure version of smartphones and laptops, right? There's terrible security on these devices; they leak all our information. So I knew, for example, that they were fans of the shows Difficult People and Partied Out, because their smart TV let me know. I even knew when they watched them, right? Netflix was a little better, it secured most of its content, but it didn't secure its assets. So I knew what shows they were watching, or what they were sifting through, because there were these images going unsecured over the network. There was actually a project idea in figuring out which things have new episodes. This is a good AI application, you can make a lot of money, and I'll share it with all of you: if you build an AI that figured out which images corresponded to new episodes, you could figure out what TV shows people watch, and then you can put them in buckets by what type of viewing they're doing. You can do this anywhere people use Netflix, this is all insecure. Project idea. If you make money off this, please give me some of it. (laughing) Yeah, so I knew that they like Coffee and Conversation with Comedians, or whatever that show's called. All of this stuff, right? This is all just the stuff
that's kind of coming out of us without us even thinking about it. And the beauty of the story, though, was that not only did these devices leak all this information about them, they were infuriating to use. They had such a tough time getting their smart coffee machine to work with their Alexa, and they had to say the phrase in a very specific way, and it didn't always work, and when the wifi got reset all the lights went off. There was all this crazy stuff. We say these things are making our lives easier, but they're not, they're just changing which parts of them are hard. And I think that's the thing to think about, right? And for me, the expectation of
user data for corporate gain is something we can no longer question as citizens of the internet. However, we are beginning to see the harmful ramifications of having a world driven by opaque algorithms, run by monopolies that are not held accountable in public. This has led to a form of digital colonialism, where you have to ask: who is the true beneficiary of your smart home? You, or the company mining you? And the reason I bring up colonialism is 'cause I think it's actually the best model to describe what we have. If you go to Wikipedia, like the good researcher that I am, and look at how they define colonialism, they say: "Colonialism is the policy of a foreign polity seeking to extend or retain its authority over other people or territories, generally with the aim of developing or exploiting them to the benefit of the colonizing country and helping the colonies modernize in terms defined by the colonizers, especially in economics, religion and health," right? If you think of what data
driven systems are doing, they are redefining our societies in ways we have no control and agency over, and without our consent, because we've been told that they're better, right? We're just being told that this is the better way for us to exist. And when you really get into it, our digital emissions can and do represent who we are, if they're shown in the right light. This is a project I did in 2014, and it's really how I got started in this world: looking at wifi and understanding how much we leak through our smart devices. So, if anyone's ever had
the experience of going home and having their phone automatically connect to their wifi network: the reason that happens is that, back in the day, your phone would basically broadcast every network it had ever connected to, in order to reconnect to them faster so you had seamless wifi. But in the process, you could make these digital profiles of people. So what I was basically
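The mechanics are simple: each phone sends out probe requests naming the networks it remembers, so a passive listener only has to group SSIDs by device. A sketch with invented MAC addresses and network names (a real capture would come from a wifi card in monitor mode):

```python
from collections import defaultdict

# Invented probe requests a passive listener might overhear:
# (device_mac, ssid) pairs -- all values made up for illustration
PROBES = [
    ("aa:bb:cc:00:00:01", "Starbucks WiFi"),
    ("aa:bb:cc:00:00:01", "SFO Free WiFi"),
    ("aa:bb:cc:00:00:01", "Oompa Live Oompato"),
    ("aa:bb:cc:00:00:02", "Starbucks WiFi"),
]

def wifi_portrait(probes):
    """Group every network each device has asked for -- the raw
    material of a 'wifi portrait' of its owner."""
    portraits = defaultdict(set)
    for mac, ssid in probes:
        portraits[mac].add(ssid)
    return {mac: sorted(ssids) for mac, ssids in portraits.items()}

for mac, networks in wifi_portrait(PROBES).items():
    print(mac, "->", networks)
```

A handful of SSIDs is already a biography: where you drink coffee, which airports you pass through, whose house you've stayed at.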
doing was collecting all the networks people's phones were broadcasting and making their wifi portraits. I would frame them and give them to people and be like, "Hey, I figured out who you are through your networks." And when they saw this, there was this amazing thing that happened. Of course, this data can be exploited from a demographic perspective, right? Say you had Starbucks wifi: they see all the networks you've been to, they know what conferences you go to, they know whether you travel business class or not, they know all this stuff based on the wifi networks you've connected to. But more than that, when I showed this to the people whose networks these were, they're like, "Oh, I forgot I'd even been there." This one in particular,
there's a network called Oompa Live Oompato, and when I showed this to the person whose portrait it was, they were like, "Oh my god, that's a weird sound my niece used to make when she was three years old, and my brother and his wife decided to make that the name of the network. I completely forgot about that." So this is actually a part of her personal story, in a way that she doesn't know about, isn't even told is possible, and is made to feel she shouldn't care about, while companies are taking that information away from her. So for me, what's really
exciting about the space we're in right now: this is all garbage, right? The tech industry is already done; they've already formalized a lot of these systems. What's cool about museums is that there's a space to re-think these assumptions. And I think about the conversation we were having in the data and privacy session, trying to figure out what values we're talking about. Let's just start with: not this, right? Let's try to not do this, because I don't think there's a better way to fight this that isn't changing the narrative. Trying to fight back is not going to be as effective as just showing a different way in which things can work, because it turns out people don't really have models of that, because we don't have access to that information. So I think the last thing I wanna say is that the simplest place to start would really be to treat the data of a person with the same dignity that you would treat that person. Thank you. (clapping) (gentle music) – There's a way that knowledge is additive, that if we're using it collectively, it goes beyond the need that you have right now, to a need that you can't even imagine. One thing that AI has
the opportunity to do is to bring more together, to do more, and I loved that so many of us were thinking of it that way. – And I think the one thing that we can protect in this industry is to make sure, when we're talking about AI, that our geniuses and our innovators and our creators know that that part is sacred to us and protected from the machine, and to know in our world that the one last thing that technology can't touch is ideas and creation. – I think this community has the possibility to process that in a positive way. If we manage to do so, by the foundation now, it's going to make a lot of difference for the world. (upbeat music)

Author: Kennedi Daugherty
