[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

It’s not just Microsoft slowing down on data centres — it’s Amazon too.

An analyst at Wells Fargo put out a note yesterday: [Seeking Alpha]

Over the weekend, we heard from several industry sources that AWS has paused a portion of its leasing discussions on the colocation side (particularly international ones) … the positioning is similar to what we’ve heard recently from MSFT.

This will mainly be about AWS customers and what AWS needs to serve them.

AWS is not cancelling existing deals — it’s just “digesting aggressive recent lease-up deals”:

It does appear like the hyperscalers are being more discerning with leasing large clusters of power, and tightening up pre-lease windows for capacity that will be delivered before the end of 2026.

Kevin Miller, AWS vice president of global data centres, denied everything on LinkedIn: [LinkedIn, archive]

We continue to see strong demand for both Generative AI and foundational workloads on AWS … This is routine capacity management, and there haven’t been any recent fundamental changes in our expansion plans.

Could just be routine! But then, Microsoft said the same things in February, when it first came out that they were cancelling data centre leases. And Microsoft kept saying this was all fine, nothing to see here, even after admitting a couple of weeks ago that they were indeed cancelling leases.

We’re sure it’ll all be okay and the AI bubble will ride high through 2025. Up and to the right! Maybe.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

People seem to like the T-shirts and merchandise, which warms my heart. And there’s a 25% off sale running as I write this. I have no idea how long it’ll last, so grab stuff while you can.

I’ve also tweaked the design for the mugs — see the site mockups above. The Redbubble store is the worst to try to navigate (and it’s even worse to try to sell stuff on), but here are the links to the best versions:

(Though I’ve just seen a photo of this mug in the wild and it looks pretty darn cool.)

We also finally have the long-demanded Bitcoin: It Can’t Be That Stupid design!

If you buy something, please let me know what country you’re in and how good it is. Reports on quality are positive so far.

If you get a shirt, remember the design is a plastic print — cold gentle wash, inside-out. I’ve worn UK-printed Redbubble shirts of the loved one’s designs for years and they’re robust shirts that wear reasonably.

The Redbubble mockups are pretty indicative of what an item will look like. I also wear the shirts in the videos.

If you get a shirt, I would be delighted if you post it to social media and tag me! Cheers to lenne0816 — “he can’t be that fat, he must be taking the picture wrongly!”

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

OpenAI is outraged at DeepSeek stealing the data that OpenAI rightfully, er, fair-used.

So OpenAI’s higher-tier models now require solid verification of identity! And … the verification doesn’t work very well. [OpenAI]

A company representative must provide photos of official paper documentation. The workflow presumes you’re doing this on a phone to give it a selfie.

One individual can verify once every 90 days. If you need to verify more than one organisation, you’re out of luck.

Some countries use electronic identification, and the paperwork OpenAI is demanding doesn’t exist. [OpenAI forum; OpenAI forum]

OpenAI also wants your credit card — even if you have credits and you remove your card between top-ups because of OpenAI’s habit of randomly billing card holders. [Open AI forum; OpenAI forum]

OpenAI verifies identity via Persona. Persona’s privacy policy page sets “user-select: none” in its CSS, so you can’t select and copy the policy text. Persona can send your data to unspecified third parties. And there’s a waiver against bringing a class action. [Persona]
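The “user-select: none” trick only disables mouse selection in the browser — the policy text is still sitting right there in the page markup. A minimal stdlib sketch makes the point (the HTML snippet here is invented for illustration, not Persona’s actual page):

```python
from html.parser import HTMLParser

# Invented stand-in for the policy page: the inline CSS blocks mouse
# selection in the browser, but the text is still plainly in the markup.
page = """
<div style="user-select: none">
  We may share your information with unspecified third parties.
</div>
"""

class TextExtractor(HTMLParser):
    """Collects the visible text, ignoring tags and whitespace."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        if data.strip():
            self.chunks.append(data.strip())

parser = TextExtractor()
parser.feed(page)
print(" ".join(parser.chunks))
# → We may share your information with unspecified third parties.
```

“View Source” or reader mode in any browser does the same job without code — the rule is purely cosmetic.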

If you ask Persona customer service for help, they refer you back to OpenAI customer service. [OpenAI forum]

OpenAI is even demanding verification from Microsoft Azure customers — who have no contract with OpenAI at all. [Microsoft; OpenAI forum]

OpenAI is trying very hard to pump its customer numbers lately, in the search for ever more venture funding. But this ham-fisted identity system is more likely to send customers to see what DeepSeek has to offer — and cheaper.

Fire

Apr. 20th, 2025 08:40 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

Dear Lazyweb,

Seeking source for simple but believable OpenGL 3D fire and smoke simulation that does not use GLES / GLSL shaders, OpenGL 1.3 only.

  • "Why would you do that to yourself?" Reasons.
  • "No, but here's one using Shader Language." You are not helping.
  • "I know you said 3D, but here's a flat one." Again, not helping.
  • "Here's someone's thesis that doesn't have runnable code." No.

Previously, previously.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

iOS developer Joachim Kurz found a bug that was bad enough to go through the pain of Apple’s horrible Feedback Assistant.

Feedback Assistant ideally wants a bug report to include your system logs to send to Apple’s developers. A privacy notice warns you there’s likely to be private information in those logs.

But Apple’s added a new line to the privacy notice: [hachyderm.io]

… agree that Apple may use your submission to improve Apple products and services, such as training Apple Intelligence models and other machine learning models.

There is no opt-out.

This is not a notice of, say, using machine learning to analyse bug reports. It’s specifically reserving the right to train the Apple Intelligence product on your private logs.

This seems a good way to discourage bug reports. No reports means no bugs, right?

(Kurz says he’s seen Apple treat bug reports that way. “We didn’t get any reports from users about this bug you described, can’t be a big issue.”) [hachyderm.io]

We don’t know that Apple is as yet training its AI on private system data from bug reports. But they very much reserve the right to do so in the future.

There’s a corporate push to put Apple Intelligence into everything. This notice can be presumed to have passed muster with legal.

The fun part is that Apple Intelligence is barely functional trash that everyone hates. It’s Apple’s worst product since Apple Maps. Apple Intelligence will likely be shut down once the AI bubble passes. And good riddance.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

The LLM is for spam. Community colleges in California are suffering an absolute flood of fake student applicants that seem to be fraudsters backed by chatbots. [Voice of San Diego]

Many community colleges offer online courses. When students flood in, the teachers are delighted! Then they find that out of over 100 students, maybe 15 are real.

The rest are scammers who signed up to get state and federal financial aid, using a stolen identity.

The scammer finds a name and a social security number. They sign up for a full course load. They stick around long enough to get their Pell grant and cash out. Then they get a new identity and start again. [Voice of San Diego; San Francisco Chronicle, 2023]

Generative AI makes it much easier for a fake student to hold on until they get the payout.

In the year up to January 2024, about a quarter of all community college applicants in California seem to have been fake. By early 2025, it was around 37%. [CalMatters]

The teachers did not sign up to be bot cops. Eric Maag of Southwestern College said, “We’re having to have these conversations with students, like ‘Are you real? Is your work real?’”

The teachers are begging the state and the local district to put in more effort to verify applicants — all student applications go through the state first. We’ll see if the recent press attention gets them moving.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

A “reasoning AI” is a large language model — a chatbot — that gives you a list of steps it took to reach a conclusion. Or a list of steps it says it took.

LLMs don’t know what a fact is — they just generate words. So chatbots are notorious for hallucinating — generating a pile of plausible text that’s full of factual errors, but is the right shape for whatever you asked for.

Given this, it’s no surprise that when you ask a “reasoning” model to explain its reasoning steps, it just generates something that looks like a list of reasoning steps — and not necessarily anything it actually used.

Anthropic is an LLM vendor founded by AI doom mongers who left OpenAI. They’re extremely into the idea that the default behaviour of AI is that the robot is plotting against us. Anthropic was responsible for a pile of preprints claiming an LLM was trying to deceive researchers, and in every case it turned out the researchers told the LLM to behave that way.

Even Anthropic couldn’t really spin this present study — of course the BSing chatbot was going to BS about how it was BSing! Though their paper talks about “faithfulness” instead of, say, “accuracy,” because they just can’t stop anthropomorphising the chatbot. [Anthropic; Anthropic, PDF]

Other researchers looking at reasoning LLMs have found the same behaviour — everything an LLM puts out is confabulated, so when it explains its behaviour, that’s confabulated too. Because they’re confabulation machines, not true-or-false machines.

Transluce is a San Francisco “AI safety” charity which brags that it analyses LLMs using other LLMs. Transluce was shocked to find that OpenAI’s o3 model “frequently fabricates actions it took to fulfill user requests, and elaborately justifies the fabrications when confronted by the user.” [Transluce; Transluce]

Transluce doesn’t have a visible cohort of AI doomers, but they do insist on describing this behaviour not as inaccuracy, but as not being “truthful” — even though an LLM has no concept of true or false. It doesn’t even have concepts. [Twitter, proxy]

Benj Edwards at Ars Technica wrote up Anthropic’s findings. The first version of his story cut’n’pasted Anthropic’s humanised descriptions of the chatbot so egregiously that an editor fixed the headline and thoroughly revised the story to “reduce overly anthropomorphic language.” [Ars Technica, archive of 10 April 2025]

This is a good worked example of gullible AI journalism and how to fix it a little. Here are my three favourite changes. First example:

  • Old version: “they sometimes hide their actual methods while fabricating elaborate explanations instead.”
  • New version: “the ‘work’ they show can sometimes be misleading or disconnected from the actual process used to reach the answer.”

Second example:

  • Old version: “OpenAI’s o1 and o3 series SR models deliberately obscure the accuracy of their ‘thought’ process, so this study does not apply to them.”
  • New version: these “SR models were excluded from this study.”

For the third example, Edwards went so over-the-top that the editor had to insert a whole new paragraph about how “It’s important to note that AI models don’t have intentions or desires”.

Let’s hope Ars does better going forward.

[syndicated profile] jwz_blog_feed

Posted by jwz

Narc dot AI:

American police departments near the United States-Mexico border are paying hundreds of thousands of dollars for an unproven and secretive technology that uses AI-generated online personas designed to interact with and collect intelligence on "college protesters," "radicalized" political activists, and suspected drug and human traffickers [...]

Massive Blue, the New York-based company that is selling police departments this technology, calls its product Overwatch, which it markets as an "AI-powered force multiplier for public safety" that "deploys lifelike virtual agents, which infiltrate and engage criminal networks across various channels." [...]

404 Media obtained a presentation showing some of these AI characters. These include a "radicalized AI" "protest persona," which poses as a 36-year-old divorced woman who is lonely, has no children, is interested in baking, activism, and "body positivity." Another AI persona in the presentation is described as a "'Honeypot' AI Persona." Her backstory says she's a 25-year-old from Dearborn, Michigan whose parents emigrated from Yemen, and who speaks the Sanaani dialect of Arabic. The presentation also says she uses various social media apps, that she's on Telegram and Signal, and that she has US and international SMS capabilities. Other personas are a 14 year-old boy "child trafficking AI persona," an "AI pimp persona," "college protestor" [sic], "external recruiter for protests," "escorts," and "juveniles."

Previously, previously, previously, previously, previously, previously, previously, previously, previously.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Alexa is a digital assistant in gadget form. It’s a simple keyword bot with a voice interface. It’s very limited, but it works well enough, and a lot of people like it.

But it’s also a money-loser. Amazon hoped Alexa users would buy a ton of stuff from Amazon all the time and sign up for Amazon Prime, and that didn’t really happen.

By August 2024, Amazon had desperately leapt onto the AI hype train. Amazon leaked that an Alexa which replaced the keyword bot with Anthropic’s Claude chatbot would be released in October 2024. This didn’t happen. [Reuters]

By February 2025, though, Amazon felt more confident, and ran a spectacular demo of Alexa+ — the AI-backed assistant! [Yahoo!; YouTube]

Alexa+ would be $20/month, or free with Amazon Prime — which is $15/month.

Alexa users started buying up-to-date Amazon Echo gadgets just to get Alexa+. Many even signed up for Prime well ahead of time.

Amazon said in late February that Alexa+ would “start rolling out in the U.S. in the next few weeks during an early access period.” This didn’t happen. [Amazon]

In late March, Amazon did another round of PR for a launch on Monday, March 31 — a release with “some missing features,” according to “internal company documents” seen by the Washington Post. This didn’t happen. [Washington Post, archive]

Amazon has been hyping up Alexa+ again in the past week.

Panos Panay, Amazon’s Head of Devices & Services, told the Independent that Alexa+ would arrive “in the coming weeks and months.” Okay. [Independent]

Just yesterday, Amazon posted to YouTube a “recap” of the Alexa+ launch event from February. [YouTube]

Reddit r/alexa readers have been wondering where the heck their Alexa+ is. Someone posted yesterday claiming to be an early Alexa+ tester. They describe a wonderful system that does almost everything in the February demo super-well! Most readers called them out as an Amazon marketing employee immediately. [Reddit]

I’m going to state that there are still zero Alexa+ users who are not Amazon employees. And there will not be more than zero any time soon.

If there was a single verifiable report of an Alexa+ user in the wild — that is, not someone working for Amazon — it would be all over the entire tech press immediately. People like me have been scouring the internet for actual users of Alexa+ weekly. The press would jump on any actual news of the thing.

Given that, it’s entirely unclear why Amazon’s been giving Alexa+ such a marketing push this week. The only marketing Alexa+ needs is one happy customer talking about it in public — and Alexa fans are not people who hold back from talking about how much they love their Alexa.

The product just existing and not sucking would give all the hype Amazon could want. It could even be buggy but promising and it would give good hype!

So it looks very like Alexa+ is not yet good enough to risk on even one non-Amazon user.

I won’t predict when it’ll come out. They obviously jumped the gun badly announcing it in February. I suspect the product is just cooked, just can’t be trusted to work, and they can’t make it work — but managers’ bonuses are riding on Alexa+, so they’re putting pressure on the team to release it anyway.

But maybe we’ll get a first leaked report of Alexa+ next week — and maybe it’ll be an actual customer and not another Amazon marketer.

Saint Acutis of Halo

Apr. 17th, 2025 01:28 am
[syndicated profile] jwz_blog_feed

Posted by jwz

He was a kid, and now his body is coated in wax and dressed in a red track jacket, jeans, and Nikes and lies in a church in a tiny Italian hill town where people arrive on tour buses to kiss their fingers and touch the glass next to his head of thick black hair.

Before him, the most recently-canonized saints lived and died in the 1800s. Acutis is different: He had a phone! He made websites about miracles! He wore sneakers! He's "God's influencer." Supplicants see him as somehow both approachably normal and extraordinarily devout. He was reportedly "uninterested in the trappings common for a wealthy child in Milan," asking his parents to donate the money they would have spent on more designer sneakers to the poor and skipping ski trips to teach catechism instead. [...]

Coincidentally, technology is bedeviling Acutis' early days as a saint. On eBay, people are selling what they claim to be his "relics," tiny pieces of a saint's body. One anonymous seller was selling "supposedly authenticated locks of Acutis' hair that were fetching upward of 2,000 euros ($2,200 US), according to the Diocese of Assisi, before being taken down," the AP reported. "It's not just despicable, but it's also a sin," one reverend who has a tiny fragment of Acutis' hair in a chapel by his office told the AP. "Every kind of commerce over faith is a sin."

There is a lot of non-relic commerce happening at the Shrine of the Renunciation, however. It's free to enter the church, but there is a gift shop around the corner at the exit.

How about them miracles? Wikipedia:

Luciana Vianna had taken her son, Mattheus, who was born with a pancreatic defect that made eating difficult, to a prayer service. Beforehand, she had prayed a novena asking for the teenager Acutis's intercession. During the service, Mattheus had asked that he should not "throw up as much". Immediately following the service, he told his mother that he felt healed and asked for solid food when he came home. After a detailed investigation, Pope Francis confirmed the miracle's authenticity, leading to Acutis's beatification. [...]

A Costa Rican woman named Valeria had fallen off her bike and suffered a brain haemorrhage with doctors giving her a low chance of survival. Valeria's mother, Lilliana, prayed for the intercession of Acutis and visited his tomb. The same day, Valeria began to breathe independently again and was able to walk the next day with all evidence of the haemorrhage having disappeared. [...] Pope Francis presided at an Ordinary Consistory of Cardinals, which approved the canonization of 15 people, including Blessed Carlo Acutis.

Well there you have it! That's just SCIENCE.


Previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously, previously.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

Cursor is an AI-enhanced code editor. You click and an LLM auto-completes your code! It’s the platform of choice for “vibe coding,” where you get the AI to write your whole app for you. This has obvious and hilarious failure modes.

On Monday, Cursor started forcibly logging out users if they were logged in from multiple machines. Users contacted support, who said this was now expected behaviour: [Reddit, archive]

Cursor is designed to work with one device per subscription as a core security feature. To use Cursor on both your work and home machines, you’ll need a separate subscription for each device.

The users were outraged at being sandbagged like this. A Reddit thread was quickly removed by the moderators — who are employees of Cursor. [Reddit, archive, archive]

Cursor co-founder Michael Truell explained how this was all a mistake and Cursor had no such policy: [Reddit]

Unfortunately, this is an incorrect response from a front-line AI support bot.

Cursor support was an LLM! The bot answered with something shaped like a support response! It hallucinated a policy that didn’t exist!

Cursor’s outraged customers will forget all this by next week. It’s an app for people who somehow got a developer job but have no idea what they’re doing. Between them, they pay Cursor $8 million each month so a bot will code for them. [Bloomberg, archive]

Cursor exists to bag venture funding while the bagging is good — $175 million so far, with more on the way. None of this ever had to work. [Bloomberg, archive]

[syndicated profile] jwz_blog_feed

Posted by jwz

It is finished! A huge thank you to the generous donors who made this bathroom remodel possible! We managed to cover more than half [see update below] of the cost of this project with donations, and we can't thank you all enough for that. May your butts ride eternal, shiny and chrome.

As a part of this project, we also re-built the floor in the Lounge bathroom, which apparently had been sealed poorly, with resulting leaks on the Men's Room ceiling downstairs. We also replaced the Lounge sink, which had seen better days (and had been fucked off the wall at least twice. Please stop fucking on the sinks, thanks.)

It is a universal law that any time a contractor opens things up (whether they are a plumber, electrician, whatever), the first thing they say is, "Wow, whoever was in here before was an idiot." You hear it every time, you get used to it. But the spaghetti mess of drainage they found under the Women's Room floor was really quite something. There was a lot of, "Why, why in the world would you do this??" Oh, also the clean-out ports for the drains were tiled over. No wonder we couldn't find them.

I've mentioned before that I suspect the plumber we had back in 2000 was actively trying to sabotage us. His crew were all his idiot sons and nephews, and multiple times I witnessed a failson spend all day laying pipe, then at the end of the day Daddy finally looks at it and says, "That's wrong, do it again". Oh, I'm so happy to be your learning experience. Anyway, how do you draw the line between incompetence and malice with someone like that? Fuck you, Benny. It's been 25 years and I'm still holding a grudge because I'm still dealing with the aftermath.

Anyway! Beautiful, shiny new toilets! With seats!

For those of you who sponsored a toilet and purchased naming rights, the plaques should be going up this week. Keep an eye out!



Update:

When I posted this, I said that we had managed to cover the entire cost of the project with donations, but after looking at the latest invoices... "LOL no".

My original guess was that this project would be about $25,000 in contractor labor, and then another $20,000 for the toilets themselves. Well it turns out we got the toilets for much less -- $9,800 -- but because of all the stupid bullshit they found along the way, the labor took three weeks instead of five days. This project should have been "dig up and re-tile a two foot hole under each toilet, move some pipes" but because of the plumbing insanity, it ballooned into "dig up and re-tile almost the entire floor of the Women's Room, and re-do all of the drain routing". So labor and rough materials came to a bit over $60,000. We took in $43,000 in donations earmarked for the toilet project, so donations covered 60% of the total.

Which is still amazing, don't get me wrong. Thank you all so much!

But damn, nightclubs are just a hole in the sand that you shovel money into.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

The dream of anyone who works with fussy humans is not to work with the fussy humans. This is the greatest promise that AI makes to the creative industries. At the least, it might shut up the whinging creatives.

The bosses can hardly resist the siren call of AI. That the customers loathe AI output doesn’t seem to matter. [Business of Fashion, 2024, archive]

Swedish clothing retailer H&M ran a publicity campaign a few weeks ago. They scored quite a lot of press coverage, telling the world H&M was going to use AI models now! H&M would create thirty “digital twins” this year! The implications will astound you! [Business of Fashion]

H&M say they took multiple photos of a model to capture both their looks and their movement patterns. This was apparently enough to let them create a synthetic image of a human. And it did this well enough to generate professionally usable imagery for advertising campaigns for particular clothing items. Huge if true!

H&M is making some big promises with this pitch. But is any of this … real? It’s certainly not plausible. Think for one moment about these claims.

Fashion images — or any ad for a particular object — have to be accurate to the object. I cannot overstate how important this is. The clothes, the belt, the watch, have to be represented accurately. You can’t use whatever approximation the diffusion model thinks is a bit like the object.

The advertising companies have been testing AI images since AI image generators came out. Messing up the object the ad is for is the precise hurdle they keep falling at. Zips, buttons, logos, and writing must be correct — and AI particularly mangles all of these. You simply cannot get reliable output from AI image generators.

You could generate AI pictures, and spend a tremendous amount of time fixing up the fine details in post-production — or you could just do a shoot of the actual item with competent models and stylists and photographers and save a lot of time and money.

You’re paying to advertise the object, not the model. You cannot risk AI messing up the object.

H&M says the right-hand image at the top of this post is AI, and somehow the AI didn’t mess up the clothes in the slightest — something AI image generators are quite bad at.

So how good are H&M’s generated pictures? This particular AI publicity push started at the magazine Business of Fashion. They know all about fashion, and nothing about AI. But Business of Fashion looked at H&M’s photos and they could hardly tell real photos and AI apart!

How did they know these were AI images? H&M told them they were. Where did they get these photos? H&M selected the photos to give to them. It was a rigged demo.

The accurate clothing is a dead giveaway. I strongly suspect H&M took photos of the models and then applied a very light bit of AI to the best of these to put in some AI tells. What Business of Fashion really needed was a game of “bad photoshop or AI.”

(While I am of course not going to name any particular person or organisation, there are absolutely a ton of just straight-up liars pitching AI-generated fashion images. They know what to put into a pitch, and that it doesn’t exist doesn’t matter. They will claim they’re working with a big brand on a campaign to people who are actually working with the brand on that campaign. They’ll claim “AI-generated models” with “AI-generated clothes” that are obviously retouched human models with their clothes composited on by a team in India.)

The press fell hard for H&M’s campaign. They didn’t spend a moment just thinking whether the claims were even plausible. This particular PR nonsense campaign netted the BBC, the Guardian, and the New York Times. [BBC; Guardian, archive; NYT, archive]

The media coverage goes on at length about the social implications of replacing models with AI. These would be huge if any of this was real.

The media never touches on the social implications of companies repeatedly making these trivially bogus claims, and what the companies are trying to do here.

H&M’s valuable final product with this campaign is the controversy. “People will be divided,” says H&M creative director Jörgen Andersson. H&M wants people talking about this like it isn’t all just fake.

If this existed, then H&M would have the whip hand. So they’re pushing a campaign to make it look like they have a magic box that will keep the workers down — even though the magic box can’t possibly exist.

H&M’s goal is to put the models and stylists in their place and stop them making demands for money and working conditions, or they’ll be replaced by a machine. But the machine doesn’t exist and can’t exist.

None of H&M’s claims make any sense if you bother to apply the slightest bit of thought to them. So expect to see another story about AI digital models in three or four months.

[syndicated profile] wizards_spaceships_feed

Posted by Space Wizard

It’s been an entire year of Wizards & Spaceships! Thank you for coming along with us on this wild and magical space ride.

You can’t turn on the news or doomscroll social media without hearing about how AI will revolutionize everything. Unfortunately, the worst people in the world seem to be in charge of it. In this episode, we talk to sci-fi legend Robert J. Sawyer about what AI and transhumanism really mean for humanity and our planet and how we can stand up to corporate hype and greed.

Show notes:

The post Season 1, Episode 12: Transhumanism and AI ft. Robert J. Sawyer appeared first on Wizards and Spaceships.

[syndicated profile] pivot_to_ai_feed

Posted by David Gerard

I just posted a new video, which is a love song to my microphone. [YouTube]

There’s a bunch of links in the show notes to everything about the mic.

The video has subtitles for accessibility, and I even corrected the subtitles, so we’re covered for usability. But $5-and-up patrons also have access to the transcript, which is more convenient for reading.

Anyway, it’s a great little mic and if you buy a Røde you will not regret it.

AI post out later today.

 

 

SHIELD GONE

Apr. 13th, 2025 09:08 pm
[syndicated profile] jwz_blog_feed

Posted by jwz

Welp, now Star Wars is dead again too. That lasted like... a day.

The game was running but the screen was black. Then as I was moving the cabinet, I discovered that "percussive maintenance" made the screen come back on for a few seconds. So something is loose, but I can't tell what. When the monitor blacks out, the game continues playing; the LEDs are lit on the deflection board's low voltage supply; and spot killer is not active, so it's getting signal.

It's also probably not great, but probably unrelated, that to get 5v back on AR2 sense, I have to push like 5.8v. That's like 15% of the power radiating away somehow. Yes, I cleaned all the edge connectors.

Anyway, like I said last time, while I am happy to continue to sink money into these weird old artifacts so that future generations can experience them, I really need to find someone I can pay to fix them when they regularly break, because I'm not good enough at that. Help me find that person.

NOTE: Understand this request as if I were asking, "Do you have a local dentist that you like?"

If your answer is of the form, "No, but have you tried contacting the CEO of the American Dental Association? They probably know Hot Dentists In Your Area", or, "No, but most major metropolitan areas have dentists", you are not helping.

In fact, any answer that starts with "No but" or "Have you" is almost certainly not helping.

I shouldn't have to say this, but apparently I have to say this.

Previously, previously.

Page generated Apr. 23rd, 2025 07:53 am
Powered by Dreamwidth Studios
