Discussion about this post

OpheliaPG:

I’m still in disbelief that there are still people on social media platforms telling others it’s not a genocide and that voting for the red or blue team will make it better. I often feel like an alien dropped in some weird place. That’s why I appreciate your sit reps with Jon and others, just to feel like I’m not going mad.

andreas5:

"That’s why I never saw the point of the supposedly groundbreaking stories about Lavender AI or the Where’s Daddy algorithm for killing people in Gaza. Really? Is an algorithm driving the bulldozer over hundreds of people?"

I recommend this interview with Shir Hever on The Analysis: https://theanalysis.news/gaza-ai-targeting-a-cover-for-genocide/

Let me share two critical points from the interview on the function of "AI" systems within the genocidal war machine. Both bear directly on the problem of getting people to murder other people on a large scale, and on the psychological and moral toll on the foot soldiers ordered to commit genocide:

(1) Machine learning systems produce target lists much faster and do not run out of targets (which happened after a few weeks in previous wars on Gaza). Naturally, the military "value" of an AI target designation is much poorer than that of one compiled by the human intelligence officer whose function the system is designed to simulate. This is apparently a feature, not a bug, however: it allows the leadership to order what amounts to carpet bombing of densely populated civilian areas without issuing explicit orders to do so (which could get them prosecuted for war crimes) and without directly breaking military protocols and rules of engagement (which could lead to a breakdown of discipline).

This is mostly drawn from an earlier story in +972 Magazine that frames the issue more usefully than the one Justin linked here: https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/

(2) Machine learning systems in the Israeli military are not trained to pick "correct" targets; they are instead optimized to get a human operator to sign off on operations. Note that these systems are designed by ex-military intelligence officers (the military-to-startup pipeline), who are well positioned to build systems that fit into, and effectively subvert, military intelligence practices. [IDF procedures were already laxer than US rules of engagement, which themselves failed to prevent massacres in occupied Iraq...]

[Machine learning systems need training data on which to optimize a reward function. Large language models are given the beginning of a known text and rewarded according to how well they predict the next word in the sequence. A system trained this way can generate new texts from prompts, essentially as a statistical average of its training texts.]
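To make that bracketed point concrete, here is a toy next-word predictor in Python. It is purely my own illustration of "statistical average of previous texts" (a simple counting model, nothing like the scale or architecture of real systems, and not anything described in the interview):

    import random
    from collections import defaultdict, Counter

    # Tiny made-up "training corpus".
    corpus = [
        "the report names the target as a militant",
        "the report names the building as a residence",
        "the target is a known militant",
    ]

    # "Training": count, for every word, which words follow it in the corpus.
    next_word_counts = defaultdict(Counter)
    for text in corpus:
        words = text.split()
        for current, following in zip(words, words[1:]):
            next_word_counts[current][following] += 1

    def generate(prompt, length=6):
        """Continue a prompt by sampling statistically likely next words."""
        words = prompt.split()
        for _ in range(length):
            candidates = next_word_counts.get(words[-1])
            if not candidates:
                break
            choices, weights = zip(*candidates.items())
            words.append(random.choices(choices, weights=weights)[0])
        return " ".join(words)

    print(generate("the report"))

The output is fluent-looking recombination of the training sentences, which is the whole point: nothing in the objective cares whether the generated text is true.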

Compiling training data based on the degree of military "success" of a given strike would require follow-up reporting that, according to Greg Stoker, the IDF doesn't do at all (cf. his recent interview with Justin on the AEP). Instead, the IDF is already sitting on a pile of previous target designations that were overruled by human operators and a pile that were signed off on.

According to Shir Hever, they simply declared that getting the green light for an operation is itself the goal of the system, and trained it to craft target designations (based on text clippings taken from intelligence reports) in whichever way would most likely lead to a human operator waving them through. Since it is hard work to diligently read through the thousands of target designations the gibberish machine produces daily, human operators tend to decide based on the title and general framing. For example, Israeli intelligence officers may get second thoughts and actually read the target designation report (and notice that it is AI gibberish) if the target is a Palestinian woman; so computer-generated target designations come to include language such as "target is a known male Hamas militant" wherever possible.
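To illustrate the incentive structure Hever describes (reward defined as sign-off rather than correctness), here is another toy Python sketch. It is entirely my own illustration of that kind of training setup, not a claim about how the actual system is built:

    from collections import Counter

    # Made-up "training pile": past designations plus whether an operator approved them.
    # Note the label is NOT "was this a correct target" but "did a human sign off".
    past_designations = [
        ("target is a known male Hamas militant", True),
        ("target is a known male operative", True),
        ("target is a woman living in the building", False),
        ("target identity uncertain", False),
    ]

    approved_words = Counter()
    rejected_words = Counter()
    for text, approved in past_designations:
        (approved_words if approved else rejected_words).update(text.split())

    def approval_score(text):
        """Crude proxy for how likely an operator is to wave this phrasing through."""
        words = text.split()
        return sum(approved_words[w] - rejected_words[w] for w in words) / len(words)

    # Two candidate framings of the *same* underlying intelligence clipping:
    # the system simply emits whichever framing scored higher on past sign-offs.
    candidates = [
        "target identity uncertain, possibly a female resident",
        "target is a known male Hamas militant",
    ]
    print(max(candidates, key=approval_score))

Once the objective is "maximize approvals", phrasing like "known male Hamas militant" wins by construction, regardless of what the underlying intelligence actually supports.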

Some of Shir's points necessarily involve conjecture, and I have not seen them reported elsewhere (he is an economic rather than a military analyst). From my understanding of machine learning systems, however, his descriptions sound all too plausible. Clearly this dystopian implementation of "AI" greatly exacerbates the grotesque amount of suffering created directly by the bombings; it is also unlikely to "solve" the problem of high suicide rates among the key human cogs in the genocide machine...

