tbyfield on Sat, 30 Mar 2019 17:10:27 +0100 (CET)



Re: <nettime> rage against the machine


On 29 Mar 2019, at 6:32, William Waites wrote:

> It seems to me it is a question of where you draw the system boundary.
> If the system is an aeroplane that is flying, then the recording device
> is not part of the control loop and it is not a cybernetic tool in that
> context. If the system is the one that adjusts and optimises designs
> according to successes and failures, then the recording device
> definitely is part of the control loop and it is a cybernetic tool.

This is where 'classical' cybernetics drew the line. Second-order 
cybernetics, which came later (late '60s through the mid/late '70s) and 
focused on the 'observing systems' rather than the 'observed systems,' 
drew that line differently. I don't have a solid enough grasp of the 
work of people like Heinz von Foerster and Gordon Pask to say with any 
certainty how and where they'd draw it, but in general their approach 
was more discursive and less, in a word, macho. So they'd be less 
interested in the isolated 'technical' performance of a single plane or 
a single flight and more interested in how people made sense of those 
technical systems — for example, through the larger regulatory 
framework that Scot spoke of: regular reviews of the data generated and 
recorded during every flight. Scot's note was a helpful reminder that 
the purpose of a black box is just to duplicate and store a subset of 
flight data in case every other source of info is destroyed. In that 
view, it doesn't matter so much that the black box itself is input-only, 
because it's just one component in a tangle of dynamic systems — 
involving humans and machines — that 'optimize' the flight at every 
level, from immediate micro-decisions by the flight staff to 
after-the-fact macro-analyses by the corporation, its vendors, 
regulatory agencies, etc. The only reason we hear about (or even know 
of) black boxes is that they fit neatly into larger cultural narratives 
that rely on 'events' — i.e., crashes. But we don't hear about these 
countless other devices and procedures when things go right. Instead, 
they just 'work' and disappear into the mysterious 'system.'

(As a side note, this brings us back to why Felix's overview of how 
different regimes contend with complexity is so stunning — 
'complexity' is a product of specific forms of human activity, not some 
mysterious natural force:
	https://nettime.org/Lists-Archives/nettime-l-1903/msg00127.html

His message reminds me very much of what I love about Marshall Sahlins's work and, in a different way, of Moishe Postone's _Time, Labor, and Social Domination_: basically, 'complexity' is immanent.)

But back to my point: Morlock's original take on the Boeing 737 
crashes and how this thread unfolded, or at least one of the areas 
where Brian and I seemed to part ways. It's easy to lose sight of the 
larger dimensions and implications of these human–machine assemblages. 
For example, media coverage very quickly focuses on detailed specialist 
subjects, like the design of the MCAS system that failed on the 737 MAX; 
then, a few days later, it suddenly leaps to a totally different order 
and focuses on regulatory issues, like the US FAA's growing reliance on 
self-regulation by vendors. We've grown accustomed to this kind of 
non-narrative trajectory from countless fiascos; and we know what 
sometimes comes next: 'investigative journalism,' that is, journalism 
that delves into the gruesome technical details and argues, in essence, 
that these technical details are metonyms for larger problems, and that 
we can use them as opportunities for social action and reform of 'the 
system.'

This journalistic template has a history. I know the US; other nettimers 
will know how it played out in other regions and countries. A good, if 
slightly arbitrary, place to start is Rachel Carson's 1962 book _Silent 
Spring_ and Ralph Nader's 1965 book _Unsafe at Any Speed_. (It isn't an 
accident that Carson's work opened up onto environmental concerns, 
whereas Nader's was more geeky in its focus on technology and policy: 
there's an intense gender bias in how journalism identifies 'issues.') 
From there, the bulk of ~investigative journalism shifted to militarism 
(i.e., Vietnam: defoliants like Agent Orange, illegal bombing 
campaigns), political corruption (Watergate), intelligence (mid-'70s: 
the Pike and Church committees looking into CIA abuses etc), nuclear 
power (Three Mile Island), military procurement, policy and finance 
(HUD, the S&Ls, etc), etc, etc. I've left out lots of stuff, but that's 
the basic drift, although these decades also saw an immense rise of 
investigative focus on environmental issues. Whether the results of all 
that environmental work have been satisfying I'll leave as an exercise 
for the reader.

That template goes a long way toward explaining how and why journalistic 
coverage of 'tech' is so ineffectual now. It can't get its arms around 
*the* two big issues: the extent to which the US has become a laboratory 
for national-scale experiments in cognitive and behavioral studies, and 
the pathological political forms that 'innovation' is enabling around 
the world. The US has ironclad regulations and norms about experimenting 
on human subjects, which are enforced with brutal mania in academia. 
But, somehow, we haven't been able to apply them to pretty much 
anything Silicon Valley does. Instead, we get ridiculous kerfuffles 
about Facebook experimenting with making people 'sad' or the tangle 
around Cambridge Analytica, which is both real and borderline-paranoiac. 
The blurriness of that boundary is a by-product of, if you like, the 
micro-epistemological divide that separates general journalism and 
investigative journalism. We're terrible at 'scaling' this kind of 
analysis up or down: either from abstract to concrete, by saying 'WTF is 
going on?!' and channeling it into broad, effective limitations on what 
infotech companies can do, or from concrete to abstract, by catching 
companies like FB doing dodgy stuff then hauling them in front of 
Congress and asking 'where else does this approach apply?' (Europe has 
been much better at this, but the cost of doing so is other train wrecks 
like the fiasco with Articles 11 and 13.)

That was precisely the divide that started this thread, when Brian 
attacked Morlock over whether the MCAS system was a discrete 
implementation of AI. Brian was right, but my point was that it doesn't 
matter because Morlock's broader point was right and (imo) matters much 
more. Does the MCAS mechanism in Boeing's 737 implement AI properly 
speaking? Who cares? Are Boeing and all the other aircraft manufacturers 
drowning in misplaced faith in machine 'intelligence' in every aspect of 
their operations? Yes. And does that misplaced faith extend far beyond 
individual companies? Yes. And the result is a systemic failure to think 
critically about where this misplaced faith is leading. The
standard technophilic response is to universalize ideas like 
'technology,' 'innovation,' and 'complexity,' and to argue that they're 
inexorably built into the very fabric of the universe and therefore 
historically inevitable. But none of that is true: what *is* true, as 
Felix argued, is that 'complexity' is an externality of human activity, 
and that we seem to be doing a crap job of imagining political economies 
that can strike a balance between our discoveries, on the one hand, and 
human dignity, on the other.

We need some sort of global Pigouvian tax on complexity: a way to siphon 
off the profits generated by messing around the edges of complexity and 
invest them in, for lack of a better word, simplicity. If we last long 
enough, we might even get it; but, like Morlock, I fear it'll take a 
true global catastrophe for people to realize that. Why? Because the 
best 'actually existing' institutions we have now — representative 
government, the media, extra- and supra-national entities like 
coordinating bodies, and (LOLZ) stuff like nettime — all get lost when 
they try to cross the divide between general and specialized forms of 
knowledge.

And that, BTW, is why second-order cybernetics is so relevant: it was 
interested in observing systems rather than observed systems. It's also 
why, wording aside, Morlock's first impulse was right: addressing this 
problem will require ending corporate obfuscation of liability and 
identifying exactly who, within these faceless edifices, is making 
specific choices that betray the public trust. He doesn't seem to think 
we can manage it; I think we can. But squabbling over the fact that he 
said 'burning people at the stake' won't help — which is, again, why I 
asked:

> And that begs an important question that leftoids aren't prepared to
> answer because, in a nutshell, they're allergic to power: what *would*
> be appropriate punishments for people who, under color of corporate
> activity, engage in indiscriminate abuses of public trust?

Cheers,
Ted

#  distributed via <nettime>: no commercial use without permission
#  <nettime>  is a moderated mailing list for net criticism,
#  collaborative text filtering and cultural politics of the nets
#  more info: http://mx.kein.org/mailman/listinfo/nettime-l
#  archive: http://www.nettime.org contact: nettime@kein.org
#  @nettime_bot tweets mail w/ sender unless #ANON is in Subject: