[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

What’s the real purpose of Google’s AI Overview? It’s affiliate marketing spam! That’s what it’s for.

Pivot readers know how to get around the AI overview. You click on the “web” tab, you use the udm14 hack or a browser extension, you just use a different search engine.
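
For the curious, the udm14 hack is nothing more than a query parameter: adding `udm=14` to a Google search URL requests the plain "Web" results tab, with no AI Overview. A minimal sketch in Python (the helper function name is mine, not any official API):

```python
from urllib.parse import urlencode

def web_only_search_url(query):
    """Build a Google search URL with udm=14, which requests the
    plain 'Web' results tab and skips the AI Overview."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": 14})

print(web_only_search_url("air purifier reviews"))
# https://www.google.com/search?q=air+purifier+reviews&udm=14
```

Bookmarking a URL like this, or setting it as your browser's search template, is all the "hack" amounts to.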

But these are all individual responses — and this is a systemic problem. The default is the AI Overview, and approximately nobody works around it.

So why can’t you find proper reviews of anything in Google? Because search engine optimisation has flooded all the top slots with cheap clickbait for affiliate marketing, mostly generated by a chatbot, and not with honest reviews.

Our heroes for today are the team at housefresh.com, a site that does nothing but reviews of air filters. HouseFresh got sick of being ignored in favour of SEO slop, and then even more ignored by the AI slop that the Overview generated from the SEO slop.

So HouseFresh did a bit of detailed journalism about how the AI Overview recommends products: “Beware of the Google AI salesman and its cronies.” [HouseFresh]

The AI Overview is either made-up slop or a slightly-wrong summary of an actual source that Google cribbed from. But the product reviews are unusually bad.

The Overview prefers to use the worst sources. It keeps using press releases, product listings and sponsored reviews — that is, information straight from the manufacturer.

The sources at the side of the AI Overview — which 99% of searchers never check — are fake too. HouseFresh found that 19.5% of the claimed sources didn’t even mention the product.

HouseFresh even got the AI Overview to spit out the standard template it seems to use for its product reviews:

The [Model] air purifier is [a worthwhile investment]/[generally considered a good value for its price]/[a worthwhile purchase]. It’s [praised]/[well-regarded] for its ability to [clean the air]/[remove particles]/[clean large rooms]. Whether the [Model] is worth it depends on individual needs and priorities.
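
A template like that is no more sophisticated than a fill-in-the-blanks string. As a toy illustration (obviously not Google's actual code), something like this reproduces the pattern for any model name you feed it, real or not:

```python
import random

def fake_review(model):
    """Fill in the review template for any product name -- no facts required."""
    verdict = random.choice(["a worthwhile investment",
                             "generally considered a good value for its price",
                             "a worthwhile purchase"])
    praise = random.choice(["praised", "well-regarded"])
    ability = random.choice(["clean the air", "remove particles", "clean large rooms"])
    return (f"The {model} air purifier is {verdict}. "
            f"It's {praise} for its ability to {ability}. "
            f"Whether the {model} is worth it depends on individual needs and priorities.")

print(fake_review("Frobnitz 9000"))  # a glowing review of a product that doesn't exist
```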

If you ask Google about a product that doesn’t exist, it’ll confidently reply with a template like this telling you what a great buy the nonexistent gadget is.

Google tries really hard never to say negative things about a product, including machines that are famous for how bad they are. If you ask for a list of cons about a product, it’ll start hallucinating those too.

Finally, Google puts sponsored product listings above the AI Overview, because Google is about the ads. The purpose of the AI Overview is to point you at the paid advertisements. It’s affiliate marketing with one layer of indirection.

The other favourite source for AI Overviews is Reddit. There are a lot of humans with genuine opinions on Reddit. But HouseFresh found entirely spam subreddits, with every post promotional and written by chatbots. No human ever goes there. The purpose is for Google to see the spam, think “oh, it’s on Reddit” and feed it into the search index and the AI Overviews.

Bloomberg asked Google CEO Sundar Pichai about the lack of separation between search and advertising, and he said: “commercial information is information, too.” Yeah thanks, Sundar. [Bloomberg]

Google had a good earnings report for second quarter, so Pichai’s done his actual job. And that means there are no brakes on the AI spam train. [Alphabet, PDF]

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Tech recruiters Howdy surveyed 1,047 full-time professionals who say they use AI. 30% of survey subjects said they were managers. [Howdy]

75% of subjects were expected to use AI at work. 22% felt pressured to use AI when it was not appropriate. So 16% of subjects just said they’d used the AI when they didn’t! People feel they would be putting their job at risk if they pushed back on AI directives.

A full one-third said that fixing the AI’s mistakes took as much time as not using the AI.

More positively, 72% felt energised by using AI in their workflow! Though this survey screened for people who were already using AI at work.

Howdy found that AI didn’t speed up a lot of people’s work — but the Register notes that two-thirds of employees just accept the AI’s output, right or wrong. [Register]

Others lied to their boss and said they didn’t use AI. They worried they’d be seen as freeloaders. That’s a valid worry — if you’re visibly using AI at work, your coworkers really do think you’re an incompetent lazy arse. AI users expected to be considered “lazier, more replaceable, less competent, and less diligent.” Wonder why. [PNAS]

The answer is: use AI when you’re told, don’t use it even when you’re told, fake using AI, don’t fake using AI, just push the AI answers through, and if they give you a metric, hammer it into the ground.

The use case for chatbots at work is: we pretend to work, and they pretend to pay us.

 

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Google claims the AI Overview on every search result — a frequently-wrong summary of other people’s work — sends a ton of clicks back to the original publishers they ripped off.

This is false. Pew Research tracked browser usage for 600 people during March 2025. Pew didn’t just ask questions, they measured on the test subjects’ devices.

When a search result has an AI overview, only 1% of searchers click on any of the supposed links to the original sources next to the overview. 99% just go right on past. [Pew; Pew]

Most searchers don’t click on anything else if there’s an AI overview — only 8% click on any other search result. It’s 15% if there isn’t an AI summary.

Another survey by search engine optimisers Authoritas showed the original sources lost 79% of clicks if there was an AI summary first, which is on the order of those two Pew results. [Guardian]

Google said the Authoritas study was “inaccurate and based on flawed assumptions and analysis,” because AI searches are great, apparently. Google did not outline what they thought a good analysis might look like, and they notably didn’t try to trash-talk Pew’s on-device survey.

Emanuel Maiberg from 404 Media wrote a story about Spotify serving fake AI-synthesized songs from dead musicians. Google served up an AI overview — but the source link was not to 404 itself, but to spam sites with chatbot-generated summaries of 404 stories. [404, archive]

This sort of thing is why UK publishers have taken Google to UK and EU competition authorities over this nonsense.

 

We can't have nice fountains

Jul. 23rd, 2025 09:38 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

San Francisco's Vaillancourt Fountain may soon meet its end, despite public outcry:

Despite considerable public support for its preservation, San Francisco's 1971 Vaillancourt Fountain is not being included in plans for a new city park where it currently stands.

Although city officials insist no final decision has been made regarding the fate of the sculptor Armand Vaillancourt's Brutalist masterpiece in Embarcadero Plaza, the fountain was not a part of any of the planning activities prepared for a public consultation that took place on 8 July. [...]

Moreover, some of the city's justifications for redeveloping the site -- such as the apparent need to attract more visitors -- were undermined by the results of its own surveys. Most respondents indicated that they came to the park and plaza often (once a week or more) and walked or took public transit to get there. If the aim is to redevelop the site to better suit the needs of San Franciscans, the public consultation made it seem like many of those needs were already being met and that the public's primary concern, preservation of the Vaillancourt Fountain, was not being considered at all.

Previously, previously, previously, previously, previously, previously.

[syndicated profile] jwz_blog_feed

Posted by jwz

BOSTON -- Viral footage from GWAR's popular Analingus Cam supposedly shows a tech CEO engaged in a lewd sexual act with his mistress, multiple people in desperate need of a fun distraction confirmed.

"We had just sprayed the crowd with gallons of alien cum and everyone was whipped into a frenzy. We cut to the Analingus Cam and a few couples were going at it, having a fun time slurping a butt, and having their butt slurped. Then we cut to this one couple and the entire mood changed," said Blöthar the Berserker. "There he was, tongue so deep inside her that I'm pretty sure he was licking the back of her eyeballs. That's when he realized the camera was on him. He dove off screen and I said something like 'Nobody licks their wife's asshole that clean, he's either having an affair or he's a scumdog whose appetite for ass knows no bounds.' The clip really took off, my grandmother High Priestess Ejaculah even shared it on Facebook." [...]

"I want to apologize to my wife, my children, my entire family, and the employees at Golaxiar who trust me. I had a major lapse in judgement. I was trying to enjoy a private moment with a work colleague, and things went too far," said Baines. "But what does this say about our society? Where two consenting adults cannot even enjoy mutual anal satisfaction without it being broadcast across a Jumbotron and plastered on social media. We should all be ashamed. This is a societal problem, GWAR shows should be a safe space for ass eating, and this is making a mockery of a beautiful sexual act."

At press time, tech CEOs began forwarding a new memo with best practices on how to properly cheat on your wife without ever getting caught.

Previously, previously, previously, previously, previously.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

UPDATE: reply from Lemkin appended at end, he says it really did happen this way

Jason Lemkin of AI startup SaaStr posted a Twitter thread on Thursday 17 July, claiming he had vibe-coded an app using Replit, and vibe coding was the best thing ever! [Threadreader]

But then … Replit deleted his production database! And then it lied at length about what it had just done. Oh no!

Lemkin posted his story and it went viral, as every AI hater was delighted by the fairy tale of an AI bro getting his comeuppance.

But this is all a bit too good to be true. It feeds your and my preconceptions perfectly. The extended conversations with the chatbot detailing its crimes read to me like fiction. It’s too pat.

There is no checkable evidence that this happened. Replit did confirm that SaaStr was a Replit customer, that a database had been deleted, and that they restored it. So a database was deleted, at some point, in some manner. Those are the only verifiable facts of the tale. [SF Gate]

SaaStr is an AI consultancy, a conference organiser, a training outfit, you name it. It calls itself “the world’s largest community of SaaS executives, founders, and entrepreneurs.” As far as I can tell, it’s mostly Lemkin doing conferences and some other stuff. [SaaStr]

There’s a link on the front page of SaaStr to “Ask SaaStr AI.” You can ask the Jason Lemkin AI to write you a little story about a startup vibe coding disaster. You’ll get back a story that follows a very similar template to Lemkin’s post! [Delphi]

Jason Lemkin is a competent businessman of many years’ experience. He specialises in AI. He blogs at length about AI. He knows how a large language model actually works. He knows perfectly well that chatbots cannot be meaningfully claimed to “lie” or “deceive” — they’re just plausible text generators.

Then he proceeds to dive into a production database that’s essential to his business and vibe-codes some rubbish into production?

It’s possible someone could do something like this in real life. We’ve posted a few examples here. But I find it difficult to believe that Lemkin in particular was this foolish. He knows too much about generative AI to have plausibly done any of this himself.

This story reads like a tale for social media with no verifiable details, and I am going to go so far as to say that I think this was a promotional story and that the claimed events did not in fact literally happen.

I accept that I could be wrong — in which case, I look forward to some checkable evidence.

We get a lot of the sort of AI story that’s really just a promotion. Take care when forwarding headlines that may be just a bit too good to be true.

Update: Jason Lemkin replied on YouTube: “I hear you. It’s exactly what happened fwiw. HOWEVER, if I knew what I knew today, I would not have trusted the AI. I would have realized when the Agent said the database was deleted — it might not actually be true. Lesson learned. There is evidence, Replit saw it all in logs 😉” [YouTube]

 

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

I asked a couple of months ago: if AI code was such an obviously superior engineering method, why didn’t we see AI code everywhere in open source?

Since then, there have been a few performative AI code pulls in open source projects. They’re generally from competent coders who know what they’re doing — but every one is a guy who wanted to say he’d got AI into a project to make a point. I mean, well done, aren’t you clever.

Sasha Levin is a Linux kernel contributor. He gave a talk about using a bot to write Linux code, and Linux Weekly News wrote it up. Except Levin hadn’t bothered telling the maintainer of the affected subsystem, Steven Rostedt, that it was bot code. Especially when it turned out the code had a bug in it. Rostedt was a little annoyed: [LWN; LWN]

The real issue is transparency. We should not be submitting AI generated patches without explicitly stating how it was generated. As I mentioned. If I had known it was 100% a script, I may have been a bit more critical over the patch. I shouldn’t be finding this out by reading LWN articles.

Apart from performative dickheads, there’s one other huge problem: copyright. Open source licenses rest on copyright. The code is copyrighted, and you have a specific license to use it.

AI copyright is up in the air. The US copyright office says pure AI output is not copyrightable. But if you write something with a bot and you get back a copy of the training data, then you’ve copied it. And if you put that copied code into an open source project, it might be a time bomb for them.

In January, someone wanted to put a bot-coded driver into the FreeBSD operating system so it could use the exFAT file system, which is used on SD cards in cameras. [FreeBSD]

Trouble is, the only open source exFAT code the bot could have trained on was the Linux code. And the Linux kernel license is not compatible with FreeBSD. You can’t drop Linux code into FreeBSD.

FreeBSD committer David Chisnall checked over the code, and some of the files turned out to be pretty close copies of the Linux code. As Chisnall said: [lobste.rs]

They used a machine that plagiarised Linux for them. This doesn’t reduce the legal liability for the project.

Some projects explicitly ban AI code. The Git, QEMU, and NetBSD projects have all forbidden AI code entirely. [LWN; GitHub; NetBSD]

Because you cannot declare the provenance. You can’t show where the bot got the code from.

You could probably sneak some bot code into these projects if you were some sort of performative dickhead. But you’re the one signing off on it and you’ve just nuked your reputation.

None of this is about quality. It’s all about provenance. And not wanting some performative dickhead to drop you in the poop.

INCOMM Scientology Keyboard

Jul. 21st, 2025 05:07 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

The Xenu-Speak in these ads is amazing:

"The datum here is that power is proportional to the speed of particle flow. This is the real secret behind the prosperity which can arise in connection with a computer operation." -- HCO PL 16 FEB 1984 WHAT IS A COMPUTER?

"The point here is that this planet's current popular concept of how to use a computer would make a baby laugh. It's a bit like using a nuclear reactor to boil water, which is also being done on this planet at this time."

"Real computers will be applied to Scientology management. They are being programmed based on OEC Policy and HCOBs and will have something to operate on which is very sane, logical and pro-survival. The potentials of the whole track computer will be harnessed to the tremendously powerful administrative policy of Scientology to help get that policy IN and increase production."

"An OT look at the org is from above it and outside it. The observer is not being hit by the noise. So he gets a broader view. Further he is viewing over a longer time span, often years. There's no place in the org itself where all its history is available in minute detail."

XScreenSaver, Wayland and locking

Jul. 20th, 2025 01:34 am
[syndicated profile] jwz_blog_feed

Posted by jwz

Welp, I got crickets in answer to my question, "How do I find the wl_surface backing an Xwayland X11 Window?" and that does not bode well for XScreenSaver ever being able to lock your screen on Wayland.

The only existing mechanism for third-party screen locking is the "ext-session-lock-v1" protocol and that API requires you to provide it with a set of wl_surfaces to display while locked.

So either I need to find the wl_surface of an existing X11 window, or I need a way to create an X11 window that is a child of a new wl_surface. Without that, we're dead in the water. I don't think there's another way.

(The "ext-session-lock-v1" model is a terrible idea and also doesn't work on Gnome or KDE, but it is the only game in town.)

Previously, previously, previously.

[syndicated profile] jwz_blog_feed

Posted by jwz

Rümeysa Öztürk:

I was looking forward to taking a short walk and catching up with friends at the interfaith center, when I was suddenly surrounded and grabbed by a swarm of masked individuals, who handcuffed me and shoved me into an unmarked car.

Suddenly, I was thrust into a nightmare. Thousands of questions crept up in the hours that passed. It felt like an eternity as my shackled body was jostled from one location to another. Who were these people? Had I been a good enough person if today was my final day? I was relieved to have finished filing my taxes, but I couldn't shake the thought of a book I needed to return to the library.

Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

"Visit Dubai!"

Jul. 19th, 2025 07:11 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

Caitlín Doherty:

I went to Dubai wrongheaded. I learnt nothing and left nauseated. I had thought it would be fun -- funny, even -- to experience the disorientation of standing at the pivot point between two world systems. Instead, it was merely disorientating -- sickeningly so. There are hells on earth and Dubai is one: an infernal creation born of the worst of human tendencies. Its hellishness cannot be laid solely at the feet of the oligarchs, whose wealth it attracts, nor the violent organised criminals who relocate there to avoid prosecution. It is hellish because, as the self-appointed showtown of free trade, it provides normal people with the chance to buy the purest form of the most heinous commodity: the exploitation of others. If you want to know how it feels to have slaves, in the modern world -- and not be blamed openly for this desire -- visit Dubai. But know that you will not be blameless for doing so. Every Instagram post, every TikTok video, every gloating WhatsApp message sent from its luxury is an abomination. A PR campaign run by those who have already bought the product, and now want only to show you that they can afford it. [...]

If you try to humanise the place you will lose your mind. If you ask yourself what the woman at the hair-braiding stand left behind to be here, and why, you will lose your mind. If you accept the kindness of the staff with whom you make a paltry effort to speak each morning as they clear your dirty breakfast plate, you will lose your mind, because your tip is the only kindness you can meaningfully offer in return. Trying to attend to your own towel by the pool might cause the man who stands for hours in the ferocious sun to do so for you to lose his job. Being served makes us cruel infants. It demeans us all.

Previously, previously, previously, previously, previously, previously, previously, previously.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

When you’ve read this post, go watch the worked examples on YouTube. [YouTube]

Our intrepid friend Aron Peterson, a.k.a. Shokunin Studio, is a media production guy. He follows the AI image generators and he experiments with them.

But he also knows what professional quality is — and what it isn’t. So Aron bravely risked his sanity on a month of Google Veo credits.

Veo 3 is still being hyped as the hotness in AI video generators. It’ll definitely replace directors and actors, you betcha. Six months, it’ll be amazing, probably.

Sadly, we’re cut off at Day 25, because Aron ran out of Veo credits. Some of you asked him to do Midjourney’s new video generator too, but he says after a month of Veo he’d find it more aesthetically pleasing to stab his eyes out with a spork: [LinkedIn]

It was really gross to watch AI videos for 22 days in a row. Even if it was only for 30 minutes a day it was very off putting and unnatural.

AI shills keep claiming that AI video generators, such as Veo 3, can make professional quality work, and they are lying. But talk is cheap – we’ve got the fails!

Most of this last selection fails in the same ways it failed in previous weeks. One thing we saw a couple of times is that if you do four clips in a row, it pixelates badly:

The difference in quality from the first clip to the last clip is a massive difference and we can see the gradual deterioration from clip to clip. For some reason, probably GPU memory related, the clips deteriorate the longer a scene goes on. The last clip is super pixelated with big compression artefacts all over the shot.

Aron did a torture test of getting Veo to render a jazz orchestra:

There’s no relationship between the musicians and the music. The fingers on the trumpet players don’t move sometimes. Sometimes a viola and a trumpet become fused together into one instrument. We hear cymbals when the drummer is hitting the snare drum or floor tom.

What have we learnt from these four weeks of really bad AI animation? We’ve learnt that every single person who says that AI video is up to real production work is full of it. It can’t follow a script, it can’t follow direction, it can’t give consistent characters, the training data keeps leaking through, the sound effects are inept.

And Google will randomly censor things — because you know that otherwise, Veo would mostly get used as a gore and pornography generator.

The AI marketers will insist that it’ll be perfect in six months, and you must be prompting it wrong. Fortunately, they’ve got a course they can sell you on how to prompt Veo properly!

Aron gives us the final word on this relentlessly dumb experiment.

I burned over 1000 credits today as Veo was not following the prompts well (I wrote stage plays at college, studied screenwriting at Birkbeck Uni and rewrote dialog for actor friends in case any prompt gurus need to know). That’s almost 10% of the credits you get with the $250 a month subscription. You could cook four meals for that price.

I went back and counted all the failed generations that Google charged for and it’s over 20% of the credits. That means if a user pays $250 a month, about $60 dollars is stolen. $720 dollars a year.

Generative AI seems to be a good business for tech companies and the vulture capitalist class. They don’t have to deliver what they promise most of the time and don’t need to refund you either.

This test was meant to be 30 days long. It lasted 25 days until the credits ran out and I barely generated a handful of clips each day. Not one clip could be called cinematic or broadcast quality. Not because of shortcomings on my side, but because I pushed the Veo to do things that AI guys and marketing people don’t want to show you in their mostly motionless slow motion boring demo videos.

There’s nothing democratising here. This is an expensive slot machine that outputs slop 98% of the time.

Don’t forget to check the videos from week 1, week 2, and week 3. I’m slowly recovering and the daily Pivots should start again next week. In the meantime, enjoy some horrors within human comprehension!


[syndicated profile] jwz_blog_feed

Posted by jwz

Elon Musk's Neuralink filed as 'disadvantaged business' before being valued at $9 billion:

Elon Musk's health tech company Neuralink labeled itself a "small disadvantaged business" in a federal filing with the U.S. Small Business Administration, shortly before a financing round valued the company at $9 billion. [...] Neuralink's filing, dated April 24, would have reached the SBA at a time when Musk was leading the Trump administration's Department of Government Efficiency. [...]

According to the SBA's website, a designation of SDB means a company is at least 51% owned and controlled by one or more "disadvantaged" persons who must be "socially disadvantaged and economically disadvantaged." An SDB designation can also help a business "gain preferential access to federal procurement opportunities," the SBA website says. [...]

Jared Birchall, a Neuralink executive, was listed as the contact person on the filing from April. Birchall, who also manages Musk's money as head of his family office, didn't immediately respond to a request for comment.

Previously, previously, previously, previously.

Xwayland wl_surface

Jul. 19th, 2025 01:01 am
[syndicated profile] jwz_blog_feed

Posted by jwz

Dear Lazyweb, how do I find the wl_surface backing an Xwayland X11 Window? This says that the window will be sent a WL_SURFACE_ID ClientMessage, but this appears not to be the case.

Previously.

Secret Police need Secret Lawyers

Jul. 16th, 2025 06:28 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

Law and Order ICE: "In the criminal justice system, the people are represented by two separate yet equally important groups. The secret police who throw suspects into unmarked vans, and the secret attorneys who deport them to third world concentration camps. These are their stories."

ICE Lawyers Are Hiding Their Names in Immigration Court:

"I've never heard of someone in open court not being identified," said Elissa Steglich, a law professor and co-director of the Immigration Clinic at the University of Texas at Austin. "Part of the court's ethical obligation is transparency, including clear identification of the parties. Not identifying an attorney for the government means if there are unethical or professional concerns regarding [the Department of Homeland Security], the individual cannot be held accountable. And it makes the judge appear partial to the government."


When Judge ShaSha Xu omitted the ICE lawyer's name, Attorney Jeffrey Okun asked her to identify who was arguing to deport his client. She refused.

Xu attributed the change to "privacy" because "things lately have changed." Xu told Okun that he could use Webex's direct messaging function to send the ICE lawyer his email, and the ICE lawyer would probably respond with her own name and address. [...]

The government's mystery attorney, who was prosecuting both Okun's and Gonzalez-Venegas's clients, wore glasses and a navy blue suit; her hair was pulled back primly from her face. She spoke quietly, with a tinge of vocal fry. Her name, according to Gonzalez Venegas, was Cosette Shachnow.

Shachnow, 33, began working for ICE in 2021, shortly after she graduated from law school, according to public records and her LinkedIn account. The latter lists "Civil Rights and Social Action" among her "favored causes."

Previously, previously, previously, previously, previously, previously.

RSS validator

Jul. 15th, 2025 05:47 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

Has the W3C RSS validator started blocking AWS? It works when I load it from home but from my server I always get 429. For a couple weeks now.

[syndicated profile] wizards_spaceships_feed

Posted by Space Wizard

Sci-fi and fantasy have always had an optimistic current, whether it’s utopian space-age cities or noble chosen ones vanquishing a dark lord. In recent years, with the popularity of romantasy and cozy fantasy, it’s easier than ever to immerse yourself in a more hopeful world.

But what if instead we made you feel bad?

In this wide-ranging conversation with multitalented author and editor Nick Mamatas, we talk about crushing your joy, corporate greed in the publishing and music production industries, and why the 80s and 90s were objectively the best time for music.

TRANSCRIPT

Show notes:

Have you bought Blight yet? If you like dark things, you should!

“Do You Love the Colour Of the Sky?” in Trollbreath Magazine

The post Season 2, Episode 3: Against Hopepunk ft. Nick Mamatas appeared first on Wizards and Spaceships.

Page generated Jul. 26th, 2025 06:48 pm
Powered by Dreamwidth Studios
