Archive for the 'Society' Category

Moving away from algorithmic curation

Monday, April 24th, 2023

This was originally posted as a tweet thread back in November 2019, which is why it starts off with some suggestions about how best to use Twitter that are irrelevant now, since Twitter was killed by a dipshit billionaire with more money than sense, and it took out the third-party clients in its death throes. But the rest of the thread holds up and I felt it worth resurrecting.

  • Tired: Helping Twitter refine its algorithmic profiling.
  • Wired: Switching to reverse-chronological timeline, as persistently as Twitter makes necessary.
  • Inspired (but admittedly not available to all): Switching to a good third-party client like Tweetbot or Twitterrific.

A few weeks ago, I saw a tweet from someone who’d switched to the algorithmic timeline experimentally and saw absolutely nothing about a then-current major news event that folks they followed had been tweeting about.

I still think about that.

It increasingly seems to me that the best things you can do with these services—recommendation engines, algorithmic timelines, and such—are (1) don’t use them when you can help it, and (2) lie to them at every opportunity.

Poison the well, and don’t drink from it.

I say this because we need to re-learn how to find each other, to recommend things ourselves, and to try each other’s personally-offered recommendations.

These are things that we should not give up to the control of companies, nor any other unknowable, unaccountable entity.

This also comes out of my thoughts about Twitter itself. And the degree to which social media has replaced RSS as our means of receiving fresh content.

It’s been good in some ways. Some of us have learned a lot, met new folks.

But we can’t depend on this.

We can’t depend on getting more of what we’ve expressed we want, if the algorithmic timeline can override that.

We can’t depend on discovering new things (and good ones, not bad ones) because the algorithm is unaccountable, built on profiling, and only seeking engagement.

A discovery algorithm’s job is to introduce people to things they don’t know they want or need.

How do you do this without introducing them to fascism, outrage fuel, shock content, or other trash? Without humans seeing that shit to screen it out?

How do you do this ethically?

Assuming the answer is “you can’t”, we then need to take up the mantle ourselves.

Spread positive things. Things you’ve made. Things you’ve learned. Skills, ideas, thoughts, actions.

This must include anti-fascism; the only alternative is silent neutrality.

And we’ll need to use social media as best we can as long as we can, because of its amplifying nature, but we must also re-learn the other ways, the older ways. Online and off.

The old ways still work.

Print still works.

Person-to-person still works.

It’s gonna be hard to break dependency on social media, because of network effects and because of the addictive nature of it.

We probably need to start DMing each other email addresses, for a start.

And regularly contacting each other, Christmas-card style.

We’re going to need to make some changes if we don’t want to keep heading in the direction we’re currently going.

Not just “we” in the first person but “we” as in society. What “we” in the first person do must be chosen with that goal in mind.

I do hope, though, that whatever we ultimately replace social media with, it still has cats.

Cutting way back on Twitter

Sunday, July 19th, 2020

The person who invented the word “doomscrolling” deserves a Nobel Prize for Literature.

Twitter, at least the way I’d been using it up to now (which I think broadly aligns with how Twitter wants to be used), is two things at once: a social network and a news site.

Its news function, particularly when viewed through a third-party client not subject to Twitter’s injection of hot garbage into the web client’s timeline, consists of curating one’s news intake by following people who retweet the news you want to see. Some, but not necessarily all, of these people may be your actual friends; at any rate, the two almost inevitably mix, as your friends may retweet what you have retweeted and vice versa.

Thus, the place where you hang out with (at least internet) friends becomes the place where you learn what’s going on in the world, and vice versa. Whereupon you can no longer have one without the other—at least, not without ditching Twitter and replacing it with something without that mixing.

Friday morning, I woke up in a peaceful, energized mood, and proceeded to begin my day, like every day before it for the past decade-plus, by reading Twitter.

After half an hour of that, I was too depressed to do anything but hold down my couch for most of the day. Well, that and read more Twitter.

I did basically nothing all of Friday. Like most days before it.

My greatest achievement that Friday was noticing the clear contrast between my mood upon waking up and my mood after looking at even a little Twitter. Whereupon I made a resolution.

Since Saturday, I have hardly looked at Twitter at all. When I have done so, it has been with a Tweetbot filter turned on that blocks all retweets, so I only see original tweets by the people I follow.

The filter helps a little bit, but the mixing is unavoidable; when people aren’t retweeting, they’re quote-tweeting, or subtweeting, or commenting, or venting.

And so my Twitter exposure lasts for only a few minutes at a time now. There isn’t as much there without taking an unfiltered look at the fresh horrors device, and I know what happens when I do that.

As the world (and the United States in particular) has been descending into fascism and climate crisis and long-overdue reckoning with a lot of things that a lot of us have ignored or accepted for far too long, the news on my timeline has gotten more and more captivating in the wrong ways.

This isn’t a new problem; it’s been true for years, even before the 2016 election. But it has gotten worse over time.

Often, I would spend most of the day reading Twitter. Lately, when I had my fill of Twitter—or ran out by hitting the top of the timeline—I would simply lie on my couch, unable to do anything.

How did I spend that time of doing nothing? Thinking. Thinking about the problems in the world; thinking about how I, or we, might solve them.

Not actually doing anything in such a direction. Just thinking about it.

One of my catchphrases is “awareness is not a substitute for action”.

Twitter makes it possible to be very aware of things that we do need people to be aware of. But it is possible to be too aware; to fall into “staying aware” by doomscrolling, or “raising awareness” by retweeting, QTing, etc., and never get around to actually doing anything.

We must be aware, but we must also do something about it.

In the past two days, as I have looked at Twitter for maybe a total of two hours (as compared to my previous daily average of… most of the day), I have had so much more energy to do things.

Much of this has been long-neglected tasks around the home. I’ve done two loads of laundry; put some things away; made some mask pieces. I’ve also made progress in reading a couple of books.

But also, I’ve had more energy for the political activism that has been my focus for a few years now. I can do things more than I’ve been able to do for a long time.

Looking away from Twitter is an immediate remedy, but creates a longer-term problem.

Twitter had been my main news source. “How do you stay so well-informed?” people would ask me, and I’d apologetically tell them that I spend way too much time reading a well-curated and completely irreplicable Twitter timeline, and that I absolutely do not recommend trying to get your news the same way I do.

(Which is completely, seriously true, and not just for doomscrolling reasons. There’s a lot of bullshit out there, from the factually untrue to the misleading to the technically-factual-but-incendiary-instead-of-actionable, which I have experience spotting and rejecting/avoiding and a lot of people do not. Getting your news from social media is highly inadvisable without it.)

Awareness is not a substitute for action, but action requires some awareness in order to not just be dancing in a void. Action is often reaction, which means knowing what’s happened to react to.

I have ideas. Maybe I’ll check NPR’s front page on some schedule. Maybe I’ll check Google News. Maybe I’ll just have to trust that people I work with who have stayed plugged into the news, or the occasional glance at my (retweet-free) Twitter timeline, will fulfill my need for some awareness without overloading or depressing me into inaction.

One way or another, in the meantime, I’m happy to be doing stuff.

I’ve been pretty quiet on Twitter since I stopped reading it, but I may actually start posting more, as I start doing more things worth posting about. Masks and other projects; things I’ve made; some political actions; nail polish.

I have long been intentional in what I tweet and retweet, especially about the sociopolitical. I try to keep things actionable, not contribute to people’s rivers of mood slime. I also try to uplift wins, to highlight the achievability thereof. That will likely not change, though I will probably be retweeting much less simply because I won’t be coming across things I would retweet in my timeline.

So you may see more tweets, and will definitely see fewer retweets. I’ll still be reading my mentions and occasionally replying to tweets I do see. I don’t think I’m leaving Twitter entirely, at least not yet—but my approach to it has already changed and I like the change a lot.

I’m looking forward to seeing how this goes.

Awareness is not a substitute for action

Sunday, July 19th, 2020

There’s a pernicious, though well-meaning, sort of mindset around “raising awareness” of various evils in the world.

The idea—such as it is—is that it’s vitally important to raise as much awareness as possible of whichever evil, and important as an individual to “stay informed”, which is to say, tuned into some daily dosage of news coverage.

I have a couple of problems with this.

First off, moment-to-moment news coverage, such as you typically see on TV/radio, on most news websites and newspapers, and indeed in all major news outlets regardless of medium, is extremely bad for actually being informed.

Moment-to-moment (often, but not necessarily, “breaking”) news coverage only tells you what just happened. This is insufficient in three ways:

  • what: but without much delving into who (or who else, or who didn’t) or when (or when else, or when it didn’t)
  • just: only the most recent event/act, with little to no history
  • happened: agency may be subtracted or actively denied (language such as “officer-involved shooting”, as well as more generally reporting on events rather than acts)

Truly informative coverage provides history, context, and depth—the sort of information that requires time and work and knowledge to assemble into a coherent and informative story.

Moreover, “staying informed” is only beneficial as a means to an end.

The opening cutscene to “Watch_Dogs 2” bothers me particularly because of this. The player character describes a system of corporate-administered mass surveillance—a system whose name is emblazoned on numerous hackable in-game objects, at least for the player’s convenience but seemingly diegetically as well—and concludes that the hacker group he belongs to, DedSec, needs to expose this system, its nature, and its ramifications to the world.

The problem I have is: That’s a start.

Playing that game in 2019*, I felt like there would’ve been at least four news articles about the fictional system in question already. One in WaPo, one in ProPublica, two in Reason magazine, and assorted reblogs and other coverage on Boing Boing, HuffPo, and various other sites. Plus innumerable tweets and Facebook posts.

And yet the system persists.

It persists because everyone is vaguely aware of it already—it’s directly involved in their lives; part of the problem is that they functionally can’t escape it—and they have accepted it.

Dig into the ramifications of such a system—enabling discrimination; enabling unwanted disclosure/privacy violation; etc.—and most people will go “Wow. Sure hope that never happens to me.”

We are trained—mostly by news media themselves, passively, as a side effect of how stories are selected, reported, packaged, and delivered—to regard news coverage as spectacle. That thing happened, over there, apart from you, apart from your life. It happened without you, and so you have no influence upon it.

I haven’t played far enough into “Watch_Dogs 2”’s main plot to know whether this spoils a twist or not, but I feel like if it happened in real life (arguably it has; the game’s reflections of its inspirations are not subtle), the consequence of DedSec blowing the lid off the story and revealing ctOS’s true nature to the world would be a worldwide collective shrug.

Not just because that particular story is about a computer system and most people’s eyes immediately glaze over when you start talking computer shit, but because it is a part of their lives already, and such an exposé, with the implication that the system being exposed is bad, in turn implies that a part of their lives is bad.

For each individual person, the response to this is as follows:

  • Wow. So what can I, individually, do about it?
  • Well, I need to be in this system to get hired, to pass credit checks, to rent an apartment, etc. So, I can’t opt out of it, even if such a thing were theoretically possible. And I cannot personally destroy it, even though I wish it didn’t exist.
  • So… nothing. I shall do nothing.

So, what did that exposé accomplish?

Here in 2020, look at the mask thing. It’s so hard to get people to understand that even a mask that only protects other people protects everyone when everyone wears them. Americans don’t grasp collective action and the importance of it.

We really need to fix that. We need to start to see ourselves as part of a society, able to take actions that affect more of that society than just ourselves and able to choose actions that improve rather than harm.

We can’t just stop at awareness forever. It’s exhausting to be aware of problems and just watch them happen. We need to choose some subset of the problems and take action, and encourage others to take action along with us.

As long as we do nothing, nothing will continue to be done, and we’ll all be very aware as it happens.

*It’s now 2020 and I still haven’t finished it. I got distracted by side missions and driving around the fictionalized version of the City. And now there’s a pandemic, so “Watch_Dogs 2” has become the Going Outside In The Before Times Simulator.

Putting the “author” in “authoritative”

Wednesday, December 25th, 2019

(This isn’t a particularly cheerful or hopeful or actionable post. It is a rumination on society, and one of its present negative trajectories.)

We’ve had a reasonable debate over the right to be forgotten. The next one will be about the right to lie. Not the right to lie in court or as part of some fraud, but the right to everyday lies and white lies. Digital surveillance deprives [us] of this important part of life.

For whatever reason, I might be ashamed or shy about my age/looks/past/job/health/sexual preferences/race/ethnicity/beliefs/political views/abilities/education/family history etc. It’s valid to lie about such in everyday life. Tracking and ML should not interfere with that right.

John Wilander, 2019-12-24

We often treat records, such as government records, as authoritative or definitive—reflecting, even to the point of creating, reality or truth. Want to know something? No need to ask the person(s) involved; just look up the record.

This practice and the mindset underneath it is commonplace for a wide range of records, including anodyne ones such as birth certificates and loan records.

Now, as anyone who’s had to correct a birth certificate or clean up a fraud-riddled credit report can attest, treating a record as definitively equal to truth can present some pretty tall barriers to fixing a discrepancy between the two. It involves asserting that there is a truth not created/confirmed by the record, demonstrating that the record is wrong, and convincing the authority who maintains that record to revise it to match reality.

Surveillance isn’t just the act of watching someone, or everyone. Indeed, in the modern era, mass surveillance doesn’t involve any individual person or persons watching anyone at all—it’s the bulk collection of phone call records, voter registrations, addresses from credit card purchases, all this data that is created whenever we interact with any kind of system. Hoover it up, save it somewhere—and you have a surveillance system. A system that creates records of what it observes.

As with classical surveillance, there is a problem here of “but what if they get it wrong?”: Records erroneously merged, data entry errors, people named “Null” who break poorly-implemented checks and comparisons. In modern mass surveillance, rather than there being a human surveillant misinterpreting your intent or mischaracterizing your actions, it is the bare facts of your life and your actions that get misrecorded. Less “surveillant thinks you’re having an affair” and more “surveillant thinks you have more children than you have” or “surveillant got your birthdate wrong”.
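The “people named Null” failure is worth unpacking, because it shows how a record-keeping system can corrupt truth without any human misinterpretation involved. A minimal sketch (hypothetical code, not drawn from any real system) of the kind of poorly-implemented check that breaks on such a name: it conflates the literal string “Null” with an absent value.

```python
def surname_is_missing_buggy(surname):
    # Buggy check: compares against the *string* "null" instead of testing
    # for a genuinely absent value, so a person literally surnamed "Null"
    # is recorded as having no surname at all.
    return surname is None or surname.strip().lower() == "null"

def surname_is_missing_fixed(surname):
    # Correct check: only an actually absent or empty value counts as
    # missing; "Null" is just another name.
    return surname is None or surname.strip() == ""

# The record for Mr. Null, as seen by each check:
print(surname_is_missing_buggy("Null"))   # True  -- record wrongly flagged
print(surname_is_missing_fixed("Null"))   # False -- the name survives intact
print(surname_is_missing_fixed(None))     # True  -- genuinely missing data
```

The bug is small, but the consequence is the point of this essay: once that record is treated as authoritative, the system’s mistaken “no surname on file” becomes Mr. Null’s official truth until he can convince someone to fix it.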

But the problem is not merely the accuracy of the record. It is the authoritativeness of the record. It is treating the record as an infallible determinant of truth, rather than a fallible artifact of an observation.

When we try to automatically verify someone’s identity using whatever scraps of information they’ve given us, or to let them board a plane with their face, we treat the data we have on someone as being necessarily, implicitly the same as their actual truth. We assume/trust/bet that the data we have matches the truth; that they are the same as each other, and therefore the record can tell us the truth.

When we make this bold leap into the concrete wall of bad assumptions, we create the sort of dystopia in which you can’t sign up for Hold Mail because “identity verification” just mysteriously fails. We create systems that reject objective reality and substitute their own.

We create systems that seize authorship over reality, and over our own personal truths.

That is an even greater crime of surveillance than knowing too much about you/everyone, or than the risk of inaccuracies in the record (both of which are also severe problems).

When Wilander talks, in the tweets I quoted, about the freedom to lie about yourself, Wilander is talking about authorship of your reality. Authority over your reality.

That is: self-determination.

The freedom to lie about yourself is the freedom to tell the truth about yourself. It is the same freedom: the freedom to tell your story as you see fit, and as much as you see fit, and as accurately as you see fit. It is your exercise of your power to determine yourself, and to decide how (and whether) to present yourself.

It is your ownership of your truth.

A system that creates records, definitive records, that other subsystems and other systems treat as the truth, and that those other systems query directly without asking us, seizes ownership over our truths away from us.

That’s dangerous enough without introducing malice into such a system. We see that now, when the system gets it wrong, when the record is inaccurate, when we have to spend our precious time and energy trying to find a real, live human being like us who (a) will believe the system got it wrong, (b) has the power to fix the record, and (c) will do so.

But when we think of surveillance, when we warn of surveillance, we are already thinking about malice—the state (or corporations, or both) acting against us.

No wonder that some of us are scared of a mass-surveillance society that has the power to write our reality without us, and confirm that (maybe-parallel) reality behind our backs, and maybe turn that reality into consequences for us ranging from inconvenient (can’t sign up for Hold Mail) to hostile (false arrest). It’s bad enough when it’s trying to help us but not always succeeding; it would be a true dystopia if (or when) it is turned truly against us.

The only record that cannot leak—or in any other way be used against you—is one that is not kept, is not recorded. But this fact becomes irrelevant in the face of malice; a system that chooses to write its records regardless of your reality, or to override your reality with the contents of its records, does not care about recording truth; it has assumed the role of defining truth, and your own truth outside of that system ceases to matter.

To guard against that is to resist surveillance in all forms, malicious and not.

Back in the present, our system of mass-surveillance/data-brokerage/(whatever facet you want to look at) is one that promises convenience. It promises to enable its users, its querents, to learn (or verify) information about a subject without their involvement (which implies without their consent). It promises to enable the construction of other systems, automated themselves, to fulfill the function of querent, to ask the questions about us that the record-keeping system promises to be able to answer.

It promises to obsolete us.

You are no more than a record to be verified and/or updated and expanded. You are not a customer, who has wants and needs and a real life that the record may or may not match; you are not an employee interacting with that customer; you are not a manager responsible for any of this. You are a card in a Rolodex and you do not hold the Sharpie.

The automation of so much of this—of identity verification, credit checks, checking out at the grocery, checking in at the airport—is, I think, part and parcel with the rise of mass surveillance. The more data we assemble on everyone, the more we can automate. And the automated systems feed data back, and contribute more.

I think the opposite of mass surveillance is also the opposite of automation. It is a focus on people, real people in the really-real world, as human beings with lives who are not the same as, nor defined by, a record. It is the awareness that the map is not the territory and the record is not the person. It is the recognition that we cannot automate our society away, because our society is us; it is made of us, by us, for us, and giving all of it over to automated systems means leaving none of it for ourselves.

That said… I have no idea how we’re gonna get there.

But I think lying to surveillance systems might have to be part of the short-term effort.

P.S.: I should say, I’m not 100% anti-automation. I think there are things that could be automated in ways that make us all better off. But we’re gonna have to be suspicious, and ask hard questions about whether any automation technology liberates us, or pushes us out of our own society—and, whenever possible, how we can ensure the former and not the latter.

P.P.S.: The day after I posted this, former Googler (fired for organizing) Laurence Berland wrote a thread, commenting on excerpts from a WaPo article, about a student-surveillance system for universities named “SpotterEDU”. One of the (many) problems with the system has been erroneous reports of lateness or absence—and the school taking the surveillance system’s side.

Of note, the mindset behind the development and deployment of the SpotterEDU system appears to be adversarial: assuming that students will flake out, lie about their attendance, make bogus excuses, etc. unless their movements are tracked at all times. That is to say, the system makes its (fallible, not-always-accurate) records of students’ locations without regard for the truth of the students’ accounts, because the system positions itself as the sole determiner of truth.