
The Internet Has Gone Foul. [Part II]

Welcome back.

For those of you just joining us, I ended Part I where Stafford Beer’s 1973 work Designing Freedom begins, in a “little house… in a quiet village on the western coast of Chile”. With the benefit of hindsight and history, we now know that Beer was describing the town of Las Cruces — but we’ll get to all of that, in the fullness of time.

In rapid order, Beer employs this pleasant scene as grand metaphor, introducing what will form the central thesis of the lectures that follow:

Although we may recognize the systemic nature of the world, and would agree when challenged that something we normally think of as an entity is actually a system, our culture does not propound this insight as particularly interesting or profitable to contemplate. Let me propose to you a little exercise, taking the bay I am looking at now as a convenient example. It is not difficult to recognize that the movement of water in this bay is the visible behaviour of a dynamic system: after all, the waves are steadily moving in and dissipating themselves along the shore. But please consider just one wave. We think of that as an entity: a wave, we say. What is it doing out there, why is it that shape, and what is the reason for its happy white crest? The exercise is to ask yourself in all honesty not whether you know the answers, because that would be just a technical exercise, but whether these are the sorts of question that have ever arisen for you. The point is that the questions themselves — and not just the answers — can be understood only when we stop thinking of the wave as an entity. As long as it is an entity, we tend to say, “Well, waves are like that”: the facts that our wave is out there moving across the bay, has that shape and a happy white crest, are the signs that tell me “It’s a wave” — just as the fact that a book is red and no other colour is a sign that tells me “That’s the book I want”.

The truth is, however, that the book is red because someone gave it a red cover when he might just as well have made it green; whereas the wave cannot be other than it is because a wave is a dynamic system. It consists of flows of water, which are its parts, and the relations between those flows, which are governed by the natural laws of systems of water that are investigated by the science of hydrodynamics. The appearances of the wave, its shape and the happy white crest, are actually outputs of this system. They are what they are because the system is organized in the way that it is, and this organization produces an inescapable kind of behaviour. The cross-section of the wave is parabolic, having two basic forms, the one dominating at the open-sea stage of the wave, and the other dominating later. As the second form is produced from the first, there is a moment when the wave holds the two forms: it has at this moment a wedge shape of 120°. And at this point, as the second form takes over, the wave begins to break — hence the happy white crest.

Now in terms of the dynamic system that we call a wave, the happy white crest is not at all the pretty sign by which what we first called an entity signalizes its existence. For the wave, that crest is its personal catastrophe. What has happened is that the wave has a systemic conflict within it determined by its form of organization, and that this has produced a phase of instability. The happy white crest is the mark of doom upon the wave, because the instability feeds upon itself; and the catastrophic collapse of the wave is an inevitable output of the system.

I am asking “Did you know?” Not “did you know about theoretic hydrodynamics?” but “did you know that a wave is a dynamic system in catastrophe, as a result of its internal organizational instability?” Of course, the reason for this exercise is to be ready to pose the same question about the social institutions we were discussing. If we perceive those as entities, the giant monoliths surrounding pygmy man, then we shall not be surprised to find the marks of bureaucracy upon them: sluggish and inaccurate response, and those other warning signs I mentioned earlier. “That is what these entities are like”, we tend to say — and sigh. But in fact these institutions are dynamic systems, having a particular organization which produces particular outputs. My contention is that they are typically moving into unstable phases, for which catastrophe is the inevitable outcome. And I believe the growing sense of unease I mentioned at the start derives from a public intuition that this is indeed the case. For people to understand this possibility, how it arises, what the dangers are, and above all what can be done about it, it is not necessary to master socio-political cybernetics. This is the science that stands to institutional behaviour as the science of hydrodynamics stands to the behaviour of waves. But it is necessary to train ourselves simply to perceive what was there all the time: not a monolithic entity, but a dynamic system; not a happy white crest, but the warning of catastrophic instability.

— Stafford Beer, “Designing Freedom”, p. 2-3 [emphasis mine]

Perhaps you were still curious about those fellows on the poles, with the ball and their cat, from the lecture notes I shared towards the end of Part I. Don’t worry; Beer offers plenty more of those in his lecture notes (and here they are!). But I mention them here so as to anchor this next excerpt, from the conclusion of Beer’s first lecture in Designing Freedom:

Remember these aspects of our work together so far. A dynamic system is in constant flux; and the higher its variety, the greater the flux. Its stability depends upon its net state reaching equilibrium following a perturbation. The time this process takes is the relaxation time. The mode of organization adopted for the system is its variety controller. With these points clearly in our minds, it is possible to state the contention of this first lecture with force and I hope with simplicity. Here goes.

Our institutions were set up a long time ago. They handled a certain amount of variety, and controlled it by sets of organizational variety reducers. They coped with a certain range of perturbations, coming along at a certain average frequency. The system had a characteristic relaxation time which was acceptable to society. As time went by, variety rose — because the relevant population grew, and more states became accessible both to that population and to the institutional system. This meant that more variety reducers were systematically built into the system, until today our institutions are nearly solid with organizational restrictions. Meanwhile, both the range and the frequency of the perturbations has increased. But we just said that the systemic variety has been cut. This produces a mismatch. The relaxation time of the system is not geared to the current rate of perturbation. This means that a new swipe is taken at the ball before it has had time to settle. Hence our institutions are in an unstable condition. The ball keeps bobbing, and there is no way of recognizing where an equilibrial outcome is located.

If we cannot recognize the stable state, it follows that we cannot learn to reach it — there is no reference point. If we cannot learn how to reach stability, we cannot devise adaptive strategies — because the learning machinery is missing. If we cannot adapt, we cannot evolve. Then the instability threatens to be like the wave’s instability — catastrophic.

I said before that there are solutions, but I have also shown that they concern organizational modes. They concern engineering with the variety of dynamic systems. By continuing to treat our societary institutions as entities, by thinking of their organizations as static trees, by treating their failures as aberrations — in these clouded perceptions of the unfolding facts we rob ourselves of the only solutions.

— Stafford Beer, “Designing Freedom”, p. 5-6 [emphasis mine]
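
(A quick aside of my own before we move on: if you would like to see Beer’s mismatch in miniature, here is a toy sketch, entirely mine and with arbitrary constants, of a system whose relaxation time is longer than the interval between perturbations. The ball never gets the chance to settle.)

```typescript
// Toy illustration only: a system that decays toward equilibrium at a fixed rate,
// but gets "swiped" again before it has had time to settle. All constants are arbitrary.
const RELAXATION_RATE = 0.05;     // fraction of the displacement shed per time step
const PERTURBATION_INTERVAL = 10; // steps between fresh perturbations
const SETTLE_THRESHOLD = 0.01;    // how close to equilibrium counts as "settled"

let displacement = 1.0;
let settledSteps = 0;

for (let t = 1; t <= 200; t++) {
  displacement *= 1 - RELAXATION_RATE; // the system relaxes toward equilibrium...
  if (t % PERTURBATION_INTERVAL === 0) {
    displacement += Math.random();     // ...but a new swipe arrives before it settles
  }
  if (Math.abs(displacement) < SETTLE_THRESHOLD) settledSteps++;
}

// With a relaxation time of roughly 1 / 0.05 = 20 steps and a perturbation every
// 10 steps, the system spends essentially no time near its equilibrial state.
console.log(`Steps spent near equilibrium: ${settledSteps} of 200`);
```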

Getting Back to Bigness

Speaking of “a dynamic system in catastrophe, as a result of its internal organisational instability”, let’s return to the subject of the modern commercial Internet. Here are, to borrow another turn of phrase, the “happy white crests” of the digital advertising economy today:

“Look at this. Look at this. Isn’t it beautiful? Look! Show the baby, show the baby! See?! Show your baby.”

1: Ad Fraud

Fraud is endemic to and rampant throughout the online ad supply chain, to such an extent that one may now assume that about ten cents out of every dollar in “media spend” invested into digital channels will go to fraudsters, which is to say, into the hands of criminal and/or terrorist groups. More conservatively, you might only lose a nickel of every dollar to such fraud. It could just as well be a quarter out of every dollar, or more.
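
Expressed as plain arithmetic, the stakes look something like the sketch below. To be clear, the budget figure and the three loss rates are illustrative assumptions of mine, bracketing the range just described; they come from none of the reports cited in a moment.

```typescript
// Back-of-the-envelope only: an assumed annual digital media budget, run through
// the three fraud-rate scenarios described above (a nickel, a dime, a quarter).
const annualMediaSpend = 1_000_000; // hypothetical budget, in dollars

const fraudScenarios: Array<[string, number]> = [
  ["conservative (a nickel on the dollar)", 0.05],
  ["working assumption (a dime on the dollar)", 0.10],
  ["pessimistic (a quarter on the dollar)", 0.25],
];

for (const [label, rate] of fraudScenarios) {
  const lostToFraud = annualMediaSpend * rate;
  console.log(`${label}: roughly $${lostToFraud.toLocaleString()} handed to fraudsters`);
}
```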

Here I feel I should pause a moment (already!) and bolster this claim a bit:

  • In June 2016, the World Federation of Advertisers published its “Compendium of Ad Fraud Knowledge for Media Investors”, which found that “[t]he cost of ad fraud is estimated at $7.2 billion in this report, or approximately 5%, of the total global”, and grimly predicted: “ad fraud will, on the current trajectory, be second in revenue only to cocaine and opiates by 2025 as a form of crime.”
  • The “2018-2019 Bot Baseline Report”, by the Association of National Advertisers and cybersecurity firm WhiteOps, claimed that “[w]e project losses to fraud to reach $5.8 billion globally in 2019”, explaining that “Today, fraud attempts amount to 20 to 35 percent of all ad impressions throughout the year, but the fraud that gets through and gets paid for now is now much smaller”.
  • A 2023 report by Juniper Research forecast that “the global potential advertising spend lost to fraud will rise from $84 billion in 2023 to $172 billion by 2028”. It further claimed that “North America will account for the highest proportion of advertising spend lost to fraud over the next five years”, and that “[i]n 2023… 17% of clickthroughs on PCs and desktops were illegitimate and could not provide ROAS [return on ad spend]”.
    • It should be noted that the same firm had previously estimated the cost of ad fraud, as of 2019, at $42 billion, while projecting that this figure would reach $100 billion by 2023. This places their starting estimate well above the contemporaneous ANA/WhiteOps figure, and suggests that their past analyses may have relied on overly pessimistic models of the rate of growth.
  • Lunio’s Wasted Ad Spend Report 2024 (produced in collaboration with IAS and Scope3) claimed that “[w]e evaluated a sample of more than 2.6 billion paid ad clicks…over the course of twelve months (May 2022 — May 2023)… 8.5% of all paid traffic was invalid”. They further estimated that about $55 billion had been lost due to what they term “invalid traffic” (IVT) in 2022, and predicted that about $71 billion in online ad-spend would be sacrificed to IVT globally in the coming year.

    This last example is especially noteworthy for having made some serious effort to quantify and distinguish the impacts of fraudulent ad-spend per advertising platform; the per-platform breakdown is laid out in their report.

2: Consent Management

The broad public and technological shift towards “privacy by design” has quickly rendered obsolete many of the ways that digital marketers have traditionally gathered deterministic data on their target audiences online, whether for the purposes of planning, executing, improving, or reporting on their work.

This has been fantastic news, as far as human societies are concerned. As for digital marketers, however, it does present certain challenges. Here is one: in my recent experience, it is common to find that anywhere from ten to forty per cent of a given advertiser’s “ad clicks”, as reported by Meta Ads Manager (I refer here specifically to ‘link-clicks’), are not attributable to a known “landing page view”, as reported via the Meta Business Tools. Similar discrepancies are routinely found in the “ad clicks” versus “landing page view” figures reported by other ad-buying platforms, as well.

(To be clear, it is certainly still feasible to achieve better attribution rates via these tools — it’s also true that some degree of variance between the two values is normal and explicable. What I am saying is that more often than not, for whatever reason, North American marketers do not know how to go about fixing this, and/or have not paid anyone to know and do this work.)
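
(If you want to check your own accounts, the gap is easy enough to express as a single number; the click and landing-page-view figures in the sketch below are invented purely for illustration.)

```typescript
// Express the click-to-landing-page-view gap as one discrepancy rate.
// Substitute your own exports from Ads Manager and your analytics tooling.
function clickDiscrepancy(reportedLinkClicks: number, knownLandingPageViews: number): number {
  return 1 - knownLandingPageViews / reportedLinkClicks;
}

// e.g. 10,000 reported link-clicks, but only 7,200 attributable landing page views
const gap = clickDiscrepancy(10_000, 7_200);
console.log(`${(gap * 100).toFixed(1)}% of reported clicks never resolve to a known landing page view`);
// -> 28.0% of reported clicks never resolve to a known landing page view
```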

Many of the significant events and milestones in this broader systemic shift have occurred since 2018; that was the year in which the EU’s “General Data Protection Regulation” (GDPR) came into force. Many of the actions required to adapt to a newer, privacy-preserving Web have yet to be undertaken by the vast majority of Canadian marketers. I’d hazard that, regarded as a class, the “average” Canadian marketing operation today lags comfortably one full decade behind the state of the art, with regard to “privacy by design” initiatives.

I wrote at length about the parallel histories of Canadian and European privacy law on this very blog back in August 2019, and while I’m still quite pleased with that post, I do have a few thoughts to add:

  • One thing I got right was how the GDPR establishes a much higher threshold for what constitutes “valid consent” to the collection and/or use of one’s personal data when browsing online. The GDPR accomplishes this by defining “consent” as… erm… well, consent. Digital publishers (and advertisers) have traditionally relied on the basis of “implied consent” to justify much of their data-harvesting for marketing (and/or other) purposes, which is, as you well know, some bullshit.
  • One thing I would’ve gotten right, but didn’t really explore was how Big Tech (and also regular-sized tech) responded to GDPR by essentially turning around to their customers and saying “right, our systems fully comply with all global privacy laws, so long as you never send us anything that would violate any privacy laws, so please tick this legally-binding checkbox which indicates that your organisation has promised never to do that”. This allows firms like Google or Meta to continue offering (more or less) the same toolsets as before, while offloading a good deal of their own liability risk onto customers, like you and me.
  • One thing I got wrong was my prediction that “data rights” would find its way into the broader public discussion surrounding the 2019 federal election held later that year. My contention had been that any party forming government was obliged to adopt GDPR-like protections under Canadian federal law, or lose access to EU markets. In actual fact, a watered-down version of the GDPR is still toddling its way through Parliamentary committees, four years on from then. In short, we still haven’t done anything. I had dared to hope that we might’ve.

The next “shock” to many Canadian marketers in this respect will likely be Google’s roll-out of “Consent Mode v2”, which seems poised to become mandatory as of March 2024 for any Google Ads customers subject to Google’s “EU user consent policy” (which is to say, most of us). One very new support doc in Google’s “Tag Manager Help Center” explains: “To keep using measurement, ad personalization, and remarketing features, you must collect consent for use of personal data from end users based in the EEA and share consent signals with Google”.

Next year, in early March, the Digital Markets Act (or DMA) will impose new obligations on large digital companies operating in the European Economic Area (or EEA), notably bringing additional consent requirements. In addition, on the platform technology side, we know that third-party cookies have been deteriorating as browsers have been phasing them out, and customers choose not to allow them.

So these regulatory, platform, and technology changes are not “new news”, but 2024 is the inflection point for marketers where your digital marketing and measurement will rapidly lose effectiveness and functionality if you don’t take action.

— Jane Lascelles, EMEA Regulations Program Manager, Google Marketing Platform, November 28th, 2023

This is, of course, a perfectly reasonable and wholly feasible request for Google to make. The trouble will come once a majority of Canadian marketers come to realise that they must undertake roughly five years’ worth of long-neglected data stewardship initiatives, within a critical project timeline of three months.
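
To be fair, the purely mechanical part of Google’s request is fairly modest. Here is a minimal sketch of what sharing those consent signals can look like: the gtag()/dataLayer pattern and the four consent keys follow Google’s public Consent Mode documentation, while the consent-banner callback (and everything else) is an assumption of mine. The hard part, as ever, is the years of data stewardship sitting behind that one ‘granted’.

```typescript
// A minimal sketch of sharing Consent Mode v2 signals. The gtag()/dataLayer pattern
// and the four consent keys follow Google's public documentation; the consent-banner
// callback below is a placeholder for whatever CMP you actually run.
const w = window as unknown as { dataLayer: unknown[] };
w.dataLayer = w.dataLayer || [];

function gtag(..._args: unknown[]): void {
  // Google's documented snippet pushes the `arguments` object (not a plain array),
  // which is the shape gtag.js expects to find on the dataLayer.
  w.dataLayer.push(arguments);
}

// 1. Declare restrictive defaults *before* gtag.js or GTM loads.
gtag("consent", "default", {
  ad_storage: "denied",
  analytics_storage: "denied",
  ad_user_data: "denied",       // newly required signal under Consent Mode v2
  ad_personalization: "denied", // newly required signal under Consent Mode v2
});

// 2. When (and only when) the visitor consents, update the signals so that
//    measurement, personalisation, and remarketing features can resume.
export function onUserConsented(): void {
  gtag("consent", "update", {
    ad_storage: "granted",
    analytics_storage: "granted",
    ad_user_data: "granted",
    ad_personalization: "granted",
  });
}
```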

3: Third-Party Data (or ‘Garbage In, Garbage Out’)

In addition to its impacts on deterministic data (most often collected and/or shared in a “first-party” context), another natural (and obvious) consequence of “privacy by design” being adopted by governments and regulators worldwide has been a step-change deterioration in “signal quality” for virtually all forms of probabilistic audience data (most often collected and/or shared in a “third-party” context). Yet despite this, such data continues to inform many key decisions relating to digital campaign budgets, planning, targeting and/or optimisation, as it has done for the past fifteen years or so.

One would expect that this should be (or at least have been, by now) a much bigger problem than it is turning out to be, in practice. Indeed, one of the more curious things I’ve witnessed in recent years is the degree to which, at practically every level, many marketers remain oblivious to such changes, and their often detrimental impacts on the overall efficacy of digital marketing efforts.

This is perhaps a fitting time to cite the compelling, peer-reviewed evidence, published in 2019, which suggested that third-party audience targeting data tends to be less accurate than random guessing, even when segmenting users along fairly basic demographic factors, such as age and/or gender cohort. And that’s to say nothing of the utility of the many far more granularly scoped “interest-based” segments one will find on offer.

The reasons for this are complex and manifold, but my personal favourite explanation comes from a four-year-old podcast episode, featuring Pontiac Intelligence CEO Keith Gooberman (the fellow who tees him up here is Ryan McConaghy, the brain formerly(?) behind What Happens in Ad Ops and currently serving as “VP, Global Monetization Strategy” at Condé Nast). The key bit comes at 17:00 – 22:15; the clip below should start right about there, and I’ve transcribed it for you below:

Companies start, these data companies. So they realise through cookies that they can cookie all types of sites, and find out all types of detail of the users that come through there, and segment them. Okay, so this is as remarketing and third-party data, this is getting going 2005-10, they’re starting to realise what they can do. Big cookie footprints. Okay.

So let’s take one of these companies that started out in the data, DMP space, okay? They already raised $50-$100 million. They’re already in debt. It’s a venture play, venture money behind it, they’re down $100 million. Okay? So, if you’re down $100 million, you’d better build yourself a scalable business. You can’t just build a business that makes $1 million a year and be like “We did it!”, right? It’s like “No no, sir, we’re gonna need a bigger business than that. You took a hundred million of our dollars”.

…So let’s say one of these companies, these early DMPs, okay? They go to all the car websites. So they go to KBB[.com], Edmonds[.com], and Cars[.com], and they say “Hey, I got a proposal for ya. I’m gonna put a pixel on your site. We’re gonna collect data of who looks at a Mercedes S-Class. We’re gonna turn around, we’re gonna sell the data to BMW and Mercedes, we’re all gonna share the money, everybody goes home happy.”

The car websites are like “Yes! What a great idea! No more ads! I love it, pixels, all day, let’s go”. They then do this, okay? They get all the pixels down. And they get an audience together of people that’ve looked at the Mercedes-Benz in the last three weeks, on all these major car retailers, okay?

And they package the audience, and then they go to BMW, and they get their agency to buy against this audience, okay? They do it, they do the whole thing. The agency says “Yes, sounds interesting. Let me test $25,000 against this, okay? I’ll test— I’ll pay, I’ll put $25,000 into the media buy, I’ll pay ya $3,000 for the data”, okay?

So they run the campaign. It’s the best performing campaign that BMW has ever run on display ads, ever. It’s unbelievable! It’s the greatest thing they’ve ever seen! They’re like “This is amazing!”. They’re like “Hey, we did $25,000 last month. Let’s do $250,000 this month”. And the data guy, salesmen, you know these people, right? He’s like [rubbing hands together] “ohh mmy ggoo, oh, YES!“. Like, “Turn it UP!“.

So he goes back to the AdOps department, okay, the AdOps, y’know… these are the people like us, who’re sittin’ there, looking at the numbers. And they’re like “I got BMW, they want every person who’s looked at a Mercedes, give ’em all three million cookies so they can do this buy!”.

And the data guy sits there, and he’s like [mimics typing]: “…nah, forty-seven thousand, bro. Only 47,000 people looked at the Mercedes.”

And he’s like, “No no no, I need three million, I need three million! That’s… they’re not gonna be able to spend more” — are you ready for the punchline? — “they’re not gonna be able to spend more than $2000 with us a month if we only have forty-seven thousand”. And the data guy goes, “Uhh, I can’t make up people who like Mercedes. You said you wanted people who like Mercedes!”.

And what do you think happened in that room? You think that they ended up serving three million cookies that included anybody that’s ever looked at a car, and other people that they could find that fit the description, so they could sell it through to the client? Or you think they held true to word, and only gave ’em the Mercedes audience?

…The joke is, is that the CEO who raised $100 million was like “I can’t, I can’t make enough money back if it’s not, if we don’t, if we don’t dilute these audiences. I can’t make this work if we don’t dilute the audiences”. And that discovery, my friend, there’s your reason why all the audiences are worthless. I get it! Listen, people look at cars. You know how many people are in the “auto intender” audience in most of these things? 144 million cookies. I’ll tell ya, it’s a big business. It ain’t that big of a business.

— Keith Gooberman, CEO, Pontiac Intelligence
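
Making the arithmetic of that anecdote explicit: the 47,000 and 3,000,000 figures come straight from the story above, while the ‘base rate’ of genuinely in-market users is an assumption of mine, included purely for comparison.

```typescript
// All figures are taken from, or loosely extrapolated from, the anecdote above.
const genuineInMarketUsers = 47_000;  // people who actually looked at the Mercedes
const soldSegmentSize = 3_000_000;    // cookies delivered to the buyer after "dilution"

// Precision of the sold segment: the share of targeted cookies that actually
// belong to the audience the buyer believes they are paying for.
const segmentPrecision = genuineInMarketUsers / soldSegmentSize;
console.log(`Diluted segment precision: ${(segmentPrecision * 100).toFixed(2)}%`); // ~1.57%

// Compare against random guessing: if, say, 2% of the addressable population is
// genuinely in-market for a luxury sedan (an assumed base rate), a dart thrown at
// random does about as well as the segment the buyer paid a data fee for.
const assumedBaseRate = 0.02;
console.log(
  segmentPrecision >= assumedBaseRate
    ? "The paid segment still beats a random draw (at this assumed base rate)"
    : "The paid segment does no better than a random draw (at this assumed base rate)"
);
```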

4: Recommender Discrimination Systems

The choices an online publisher makes as to what content they present — or “surface” — to a given user are the most direct and effective lever available to them to influence how their audience “engages” with them, and thus, to engineer their own profitability.

In order to better predict which sorts of content will keep users “engaged” (and thus eligible to be advertised at), such systems run experiments by presenting some “test subjects” (that is, you or I) with some given stimuli, then observing their interaction (or lack thereof), and noting this for future reference. Both the accuracy of such content prediction, and its potential to logically segment and stratify users on the basis of some shared set of data-traits, improve as the scale of experimentation increases. This process goes on continuously and forever — or at least, the machines are often designed to assume that it will.
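
For the sake of illustration, here is a toy version of that experiment-observe-update loop, written as a simple epsilon-greedy bandit over a handful of invented content ‘arms’. Real recommender systems are vastly more elaborate (contextual features, per-user embeddings, and so on), and the engagement rates below are made up; the point is only the shape of the loop.

```typescript
// A toy engagement-maximising loop: try stimuli, observe interactions, favour winners.
type Arm = { id: string; shows: number; clicks: number };

const arms: Arm[] = [
  { id: "calm-explainer", shows: 0, clicks: 0 },
  { id: "outrage-bait", shows: 0, clicks: 0 },
  { id: "cute-animals", shows: 0, clicks: 0 },
];

// Hidden "true" engagement rates the system never sees directly; it can only
// run experiments and observe the outcomes, as described above. Invented numbers.
const trueEngagement: Record<string, number> = {
  "calm-explainer": 0.03,
  "outrage-bait": 0.12,
  "cute-animals": 0.08,
};

const EPSILON = 0.1; // fraction of impressions spent exploring rather than exploiting

function observedCtr(a: Arm): number {
  return a.clicks / (a.shows || 1);
}

function chooseArm(): Arm {
  if (Math.random() < EPSILON) {
    return arms[Math.floor(Math.random() * arms.length)]; // explore: show a random stimulus
  }
  return arms.reduce((best, a) => (observedCtr(a) > observedCtr(best) ? a : best)); // exploit
}

for (let impression = 0; impression < 100_000; impression++) {
  const arm = chooseArm();
  arm.shows++;
  if (Math.random() < trueEngagement[arm.id]) arm.clicks++; // observe the interaction (or lack thereof)
}

for (const a of arms) {
  console.log(`${a.id}: shown ${a.shows} times, observed CTR ${(observedCtr(a) * 100).toFixed(1)}%`);
}
```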

That is the essence of the wargame in which your individual mind now engages, against sophisticated networks of computers, programmed by one or more teams of clever and well-paid “domain experts”, whenever you thumb through your social media feeds. The algorithms tasked with making such decisions are termed “recommender systems”, but insofar as these work to monopolize an ever-greater share of each individual user’s attention, in order to maximise an online publisher’s profits, they could perhaps be more usefully termed “discrimination systems” — for the sole animating purpose of such systems is to mark us out from one another.

What these systems tend to work out, of course, is that the kinds of content which best capture our attention (and this is essential: within the contexts they are tasked with “optimising”) tend to be strongly emotively charged. Fight, flight, or freeze. They can, and often do, become adept at surfacing content which makes us feel angry, or sad, or happy, or any number of more nuanced emotions, and they show an exceptional knack for doing this just as one’s own, individual attention threatens to wane. In these respects, recommender systems do superlative work.

There are, of course, other dimensions of human psychology in which recommender systems are proving less helpful. They would seem to have a tendency to amplify extremist perspectives and/or other “fringe” views, and to prompt dysmorphic disorders in children. They have the general effect, at a population level, of fracturing our civic spaces, corroding our politics, and degrading our mental health.

Many would consider these to be negative aspects of such systems; something worth investigating further, and perhaps somehow correcting. Unfortunately, there seems to be little overlap between this growing school of thought and the profit-driven actions of those publishers we’ve been mentioning. The same is true of advertisers — yes, us — who blithely encourage the ever-more pervasive deployment of such systems, whose workings we do not and cannot even claim to know, in pursuit of an ever-rising stock of potential ad impressions, addressable to the individual mind, with ever-growing accuracy and precision.

5: Generative AI-yai-yai

…Look, you don’t really need me to churn out any more words on the potential impacts of “generative AI” tools on Field X, do you? Please, let’s just assume you’d rather that I didn’t, and say only that the advent of large language models (LLMs), deployed at scale, is likely to exacerbate all of the problems I’ve just listed above.

That’s not to say that the technologies themselves are nefarious (Beer would object to the premise, I’m sure), even if there are compelling arguments that the most popular contemporary systems are. What I am suggesting is that among the “first movers”, the most enthusiastic adopters of such technologies, are likely to be those who would leverage them towards nefarious ends. I fear that a great many of them shall do quite well for themselves, indeed.

Another Beer, Please

While I promise not to make a habit of this, there are times when any further editorialising would only serve to muddle the meaning of one’s source. So, in much the same way that I began this post by quoting far too much from Beer’s first lecture, I’d like to close by quoting far too much from his second:

Our culture has had nearly 300 years to understand the problems of Newtonian physics. It has had more than half a century to get its grip on relativity theory and the second law of thermodynamics, knowing that it is at any rate possible to make general statements about the physics of the universe. Not all of us, I dare say, would care to answer basic questions about these two, although one might have supposed that the culture would have imbibed them by now. The observed fact is that the culture takes a long, long time to learn. The observed fact also is that individuals are highly resistant to changing the picture of the world that their culture projects to them.

I am trying to display the problem that we face in thinking about institutions. The culture does not accept that it is possible to make general scientific statements about them. Therefore it is extremely difficult for individuals, however well intentioned, to admit that there are laws (let’s call them) that govern institutional behaviour, regardless of the institution. People know that there is a science of physics; you will not be burnt at the stake for saying that the earth moves round the sun, or even be disbarred by physicists for proposing a theory in which it is mathematically convenient to display the earth as the centre of the universe after all. That is because people in general, and physicists in particular, can handle such propositions with ease. But people do not know that there is a science of effective organization, and you are likely to be disbarred by those who run institutions for proposing any theory at all. For what these people say is that their own institution is unique; and that therefore an apple-growing company bears no resemblance to a company manufacturing water glasses or to an airline flying aeroplanes.

The consequences are bizarre. Our institutions are failing because they are disobeying laws of effective organization which their administrators do not know about, to which indeed their cultural mind is closed, because they contend that there exists and can exist no science competent to discover those laws. Therefore they remain satisfied with a bunch of organizational precepts which are equivalent to the precept in physics that base metal can be transmuted into gold by incantation—and with much the same effect. Therefore they also look at the tools which might well be used to make the institutions work properly in a completely wrong light. The main tools I have in mind are the electronic computer, telecommunications, and the techniques of cybernetics…

Now, if we seriously want to think about the transmutation of elements in physics, we will recognize that we have atom-crackers, that they will be required, and that they must be mobilized. We shall not use the atom-crackers to crack walnuts, and go on with the incantations. But in running institutions we disregard our tools because we do not recognize what they really are. So we use computers to process data, as if data had a right to be processed, and as if processed data were necessarily digestible and nutritious to the institution, and carry on with the incantations like so many latter-day alchemists.

— Stafford Beer, “Designing Freedom”, p. 11 [emphasis mine]

I have already suggested a list of three basic tools that are available for variety amplification: the computer, teleprocessing, and the techniques of the science of effective organization, which is what I call cybernetics. Now I am saying that we don’t really use them, whereas everyone can assuredly say: “Oh yes we do.” The trouble is that we are using them on the wrong side of the variety equation. We use them without regard to the proliferation of variety within the system, thereby effectively increasing it, and not, as they should be used, to amplify regulative variety. As a result, we do not even like the wretched things.

[…]

It’s obvious really, once the concept of variety and the law of requisite variety are clear. The computer can generate untold variety; and all of this is pumped into a system originally designed to handle the output of a hundred quill pens. The institution’s processes overfill, just as the crest of the wave overfills, and there is a catastrophic collapse. So what do we hear? On no account do we hear: “Sorry, we did not really understand the role of the computer, so we have spent a terrible lot of money to turn mere instability into catastrophe.” What we hear is: “Sorry, but it’s not our fault—the computer made a mistake.”

Forgive my audacity, please, but I have been “in” computers right from the start. I can tell you flatly that they do not make mistakes. People make mistakes. People who program computers make mistakes; systems analysts who organize the programming make mistakes; but these men and women are professionals, and they soon clear up their mistakes. We need to look for the people hiding behind all this mess; the people who are responsible for the system itself being the way it is, the people who don’t understand what the computer is really for, and the people who have turned computers into one of the biggest businesses of our age, regardless of the societary consequences. These are the people who make the mistakes, and they do not even know it. As to the ordinary citizen, he is in a fix—and this is why I wax so furious. It is bad enough that folk should be misled into blaming their undoubted troubles onto machines that cannot answer back while the real culprits go scot free. Where the wickedness lies— and wickedness is not too strong a word—is that ordinary folk are led to think that the computer is an expensive and dangerous failure, a threat to their freedom and their individuality, whereas it is really their only hope.

There is no time left in this lecture to analyse the false roles of the other two variety amplifiers I mentioned—but we shall get to them later in the series. For the moment, you may find it tough enough to hear that just as the computer is used on the wrong side of the variety equation to make instability more unstable, and possibly catastrophic, so are telecommunications used to raise expectations but not to satisfy them, and so are the techniques of cybernetics used to make lousy plans more efficiently lousy.

— Stafford Beer, “Designing Freedom”, p. 14 [emphasis mine]

Well, you heard the fellow. In Part III (stay tuned!), I still hope to focus on the fourth of Beer’s 1973 Massey Lectures, titled “Science in the Service of Man”. In this post, I’ve mainly dealt with the problems facing digital marketers right now, as we enter another new year. With the next, I’ll aim to explore (what I see as) their key underlying causes — and why, despite their relative obviousness, a “solve” often remains unlikely, absent some significant shift in the prevailing industry paradigm. Also, since I’ve now managed to go 12,000 words without more than vaguely gesturing in the direction of Project Cybersyn (which, holy shit, you guys, Project Cybersyn!!), I’ll try to work that in somewhere, too.

Thanks for reading.

-R.