The anti-AI thread

News from early March that I've only just seen:

It seems that Amazon is happy to label AI mistakes as "human error", while no doubt lauding any successes as being due to AI.
"Amazon Web Services suffered a 13-hour outage in December after engineers let its Kiro AI coding tool update code without requiring any oversight
...
Engineers must get two people to review changes before deployment, use a formal documentation and approval process, and follow stricter automated checks"
It's pretty fascinating to me that Amazon, of all places, would not have already had these policies in place. I do wonder if the "engineers let its Kiro AI coding tool" is missing a bit where engineers were told to let Kiro do this by a manager. Based on my experience, that seems the more likely scenario: engineers generally aren't fond of having to clean up messes in production and deal with the follow-up meetings that come as a result, and would likely have raised objections. But who knows; some engineers also aren't good at their jobs.
 
"Amazon Web Services suffered a 13-hour outage in December after engineers let its Kiro AI coding tool update code without requiring any oversight
...
Engineers must get two people to review changes before deployment, use a formal documentation and approval process, and follow stricter automated checks"
It's pretty fascinating to me that Amazon, of all places, would not have already had these policies in place. I do wonder if the "engineers let its Kiro AI coding tool" is missing a bit where engineers were told to let Kiro do this by a manager. Based on my experience that seems a more likely scenario, engineers generally aren't fond of having to clean up messes in production and having to deal with the follow-up meetings that come as a result, and would likely have raised objections. But who knows, some engineers also aren't good at their jobs.

This is the most likely scenario, as Amazon has previously been the subject of stories describing how the use of AI has literally been mandated by management.

In my general experience, many managers have few to no skills comparable to those of the people they are managing. What they have in place of such skills is an "I am right, I am in charge of you, and you'll do what I say" mentality. I'm sure Amazon, like most other workplaces, is full of such managers. You know, the ones that got their jobs not because of their capabilities but rather because of who they knew, who they were related to, how well they were able to backstab more effective employees, or even all three.

So, I figure it is highly likely that such an ignorant manager told some engineers to "let the AI take care of it", and the engineers just gave up arguing and did what they were told. Or, even worse, a particularly ignorant manager simply bypassed the responsible engineers entirely and instead directly instructed the AI to do it...all the while lacking the knowledge to judge the consequences of what could (and eventually did) occur.
 
It often comes down to a misunderstanding of what AI can do vs. what it cannot do, and how often it can make mistakes with what it can do. They are getting better, quite good even, but the miss rate can still depend on the model used, and sometimes multiple swings are needed to get a hit (and woe to you if those swings do the wrong thing).

I've been saying for a while now that there needs to be an 'AI -whatif' switch, or some kind of sandboxing system that can replicate AI behavior before it commits to something in the wild.
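A rough sketch of what that "-whatif" layer might look like, assuming a hypothetical gate that every AI-proposed action has to pass through. ToolGate and its methods are invented for illustration, not any real framework's API:

```python
# Hypothetical "-whatif" gate for AI tool calls (all names invented here).
# In dry-run mode, every action is recorded instead of executed, so a human
# can review the plan before anything touches the real system.
import shutil
from pathlib import Path

class ToolGate:
    def __init__(self, whatif=True):
        self.whatif = whatif
        self.plan = []  # human-readable list of what the AI would have done

    def delete_tree(self, target):
        if self.whatif:
            self.plan.append(f"WOULD delete directory tree: {target}")
            return
        shutil.rmtree(target)  # only reached when whatif=False

    def write_file(self, target, content):
        if self.whatif:
            self.plan.append(f"WOULD write {len(content)} bytes to {target}")
            return
        Path(target).write_text(content)

# Dry run: nothing actually happens, we just collect the plan for review.
gate = ToolGate(whatif=True)
gate.delete_tree("/home/osiris")
gate.write_file("/etc/app.conf", "x=1")
for step in gate.plan:
    print(step)
```

The point being that the dry-run pass produces a reviewable plan before anything gets re-run for real with whatif=False.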
 
Seems like that's something that could be done outside of the AI itself. Even with my dumb little tasks, I always test first if I'm not exactly sure what the tools I'm using are gonna do, especially if data integrity's on the line. Can't imagine someone just YOLOing AWS or other big systems :^/
 
True, but you can reasonably reduce the blast radius by at least acknowledging what the thing you're trying to do is supposed to do. If I run rm -rf /home/osiris, I've got a pretty good idea of what that's doing. If I tell an AI to delete my home folder and it does a lookup of the account the process is running under (whoops, that was SYSTEM!) and tries to dump %WinDir%\system32\config\systemprofile for some god-forsaken reason, I won't know it until it's already run the command. But hey, at least in a mature system we'll have it logged!
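A minimal sketch of that blast-radius idea, assuming a hypothetical guard (guarded_delete and ALLOWED_ROOT are invented names): resolve the AI-proposed path and refuse anything outside a sandbox root, logging the decision either way:

```python
# Invented sketch: confine AI-proposed deletes to a sandbox root and log
# every decision, so refusals and approvals both leave a trail.
import logging
from pathlib import Path

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("ai-guard")

ALLOWED_ROOT = Path("/home/osiris/scratch").resolve()  # hypothetical sandbox

def guarded_delete(requested: str) -> bool:
    target = Path(requested).resolve()  # normalizes ".." and symlink tricks
    if ALLOWED_ROOT != target and ALLOWED_ROOT not in target.parents:
        log.warning("REFUSED delete outside sandbox: %s", target)
        return False
    log.info("approved delete inside sandbox: %s", target)
    # shutil.rmtree(target) would go here once approved
    return True

guarded_delete("/home/osiris/scratch/tmp")                  # approved
guarded_delete("/home/osiris/scratch/../../../etc/passwd")  # refused
```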
 
It often comes down to a misunderstanding of what AI can do vs. what it cannot do, and how often it can make mistakes with what it can do. They are getting better, quite good even, but the miss rate can still depend on the model used, and sometimes multiple swings are needed to get a hit (and woe to you if those swings do the wrong thing).

I've been saying for a while now that there needs to be an 'AI -whatif' switch, or some kind of sandboxing system that can replicate AI behavior before it commits to something in the wild.
Hm, based on the black box nature of them, is it viable to create one that does this reliably?
 
It's pretty fascinating to me that Amazon, of all places, would not have already had these policies in place

Notes:

1. You would be...amazed...at how many Fortune 500 companies are run off some random old spreadsheet maintained by like one random dude who has been there forever lol. Or literal mainframes. One of my earliest jobs out of college involved a Windows server running a mainframe emulator, tied to Windows 95 machines running terminal emulators (and it was WICKED FAST lol). @Red Squirrel holla lol

* Up to 80% of the world’s business transactions still touch a mainframe at some point lol
* COBOL is still used by over 50% of global banking systems. Around 30 billion lines of COBOL code are still in active use worldwide.
* Fortran from 1957 is still heavily used in climate modeling, aerospace, and nuclear simulations

Aviation?

* Many FAA ATC core systems trace back to the 1960s through the 1980s
* Many primary radar networks were installed from the 1970s through the 1990s
* Some core reservation systems date back to 1970s mainframes

2. The earth is not millions of years old. Recorded history is not ~6,000 years old. The reality is that the human experience is ~70 years old at any given time. Everything is run by beginners. We phase out the older generation & replace them with twice the newbies at half the cost & then lose all of the great stuff we learned lol. Then we repeat the same lessons over & over again because everyone is:

a. Tired
b. On a budget
c. Stressed out
d. Dealing with deadlines
e. Subject to the need for short-term results
f. Not paid enough to care

Which causes companies to miss obvious signs:

* Ignoring major tech shifts (Kodak, Nokia, Blockbuster)
* Arrogance or complacency (BlackBerry, Yahoo)
* Bad strategic decisions (Toys "R" Us, Sears)
* Unethical behavior (Enron)

Same patterns in the world of AI:

1. Lack of human oversight
2. Weak data governance
3. Over-trust in AI autonomy
4. Poor integration into real workflows
5. Hype-driven deployment instead of ROI-driven design

16 billion leaked credentials last year.
3. Nearly everything is set up dumb. We tend to lack solid, documented support systems, as individuals & as organizations. We should be able to demonstrate the logic of why things are the way they are. We should have a tested "plan B" set up already. Applies anywhere & everywhere. Remember COVID?

The people & companies who bother to lock things down have it MADE!!


 
Hm, based on the black box nature of them, is it viable to create one that does this reliably?
Honest answer? Not really, unless you train the AI yourself, which is... onerous. The best way is to integrate it into a system that does things reliably and use it to call those. That still has a place in modern IT/systems management, but it's not as cut-and-dried as 'replace your staff with this homunculus of copper and runes'.

Conduit over mobile VPN, hitting open-webui with an MCP to n2n/other orchestrators, with full logging back to splunk/whatever? Now you're starting to turn up the heat.
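Purely as an illustration of the "full logging back to splunk/whatever" piece, something as simple as emitting one JSON line per tool call gives any log shipper a trail to pick up. The tool name and fields here are made up for the sketch:

```python
# Made-up example of the audit-log piece: one JSON line per tool call,
# appended to a file a Splunk forwarder (or anything else) can tail.
import json, time, uuid

def log_tool_call(tool, args, result, path="ai_audit.jsonl"):
    event = {
        "id": str(uuid.uuid4()),  # unique per call, for tracing
        "ts": time.time(),
        "tool": tool,
        "args": args,
        "result": result,
    }
    with open(path, "a") as f:
        f.write(json.dumps(event) + "\n")

# e.g. after the orchestrator runs a validated tool on the model's behalf:
log_tool_call("restart_service", {"host": "web01", "service": "nginx"}, "ok")
```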
 