Conversation

This is your reminder that for the past 40 years the price of consumer electronics has undergone roughly 10% compound *deflation*, year after year.

An entry-level Android or iPhone is every non-shooty-bang-or-transportation James Bond gadget rolled into one, for the inflation-adjusted cost of a 35mm film SLR or a Commodore 64 back in 1984.

It's also much more powerful than every supercomputer on the planet back then, in combination.

And it's in your pocket.
https://canada.masto.host/@graydon/113579152811903958
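
(A quick back-of-the-envelope check of that compound-deflation claim; the 10% rate and 40-year span are taken from the post above, the rest is just arithmetic.)

```python
# What ~10% compound deflation per year does over 40 years.
annual_factor = 0.90   # prices retain ~90% of their value each year
years = 40

remaining = annual_factor ** years
print(f"Price after {years} years: {remaining:.1%} of the original")  # ~1.5%
print(f"Overall: roughly {1 / remaining:.0f}x cheaper")               # ~68x
```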

@graydon We're heading towards the buffers at the end of the track wrt. Moore's Law, and the rentiers who invested in the VLSI industry still want the rate of profit they've become accustomed to, even though the cost of computation has crashed so low nobody can figure out anything useful to do with it.

So we get compute-intensive hype bubbles designed to fleece investors: cryptocurrency, LLMs, VR/AR, quantum computing (the latter is more compute-R&D-intensive, but follows the pattern).

@cstross @graydon We've got an enormous list of computation problems that we don't have the resources to compute properly. What we don't have are the means to build the software systems to do those computations, because of their complexity.
And any chemist can find a reasonable, grant-supported way to use an arbitrary amount of compute 8)

@cstross @graydon yup.

2023: $150–$300 (Best Buy and Amazon range)
2005: $4,000 (Sony)
1997: $22,900 (Fujitsu’s first 40″ plasma TV)
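
(A quick sketch of the compound rate those TV prices imply; the $225 midpoint of the 2023 range is my assumption, the other prices are as quoted above.)

```python
# Implied compound annual price decline for the flat-panel TV figures above.
price_1997 = 22_900   # Fujitsu's first 40" plasma, as quoted
price_2023 = 225      # assumed midpoint of the $150-$300 range
years = 2023 - 1997

annual_factor = (price_2023 / price_1997) ** (1 / years)
print(f"Implied annual price change: {annual_factor - 1:.1%}")  # about -16% per year
```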

Or my fav:

https://en.m.wikipedia.org/wiki/Cray-1

CPU: 64-bit processor @ 80 MHz
Memory: 8.39 megabytes (up to 1,048,576 words)
Storage: 303 megabytes (DD19 unit)
FLOPS: 160 MFLOPS

You can’t actually buy a computer this slow; even systems-on-a-chip clobber it. You have to go to what is now a microcontroller, like an ESP32, to get this slow.

Basically, the cost of the electricity used to run the Cray-1 for 10 minutes will buy you a better computer nowadays.
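
(That claim roughly pencils out. A minimal sketch, assuming the commonly quoted ~115 kW draw for the Cray-1 and a retail electricity price of about US$0.15/kWh; both figures are my assumptions, not from the thread.)

```python
# Rough sanity check of "10 minutes of Cray-1 electricity buys a better computer".
cray_power_kw = 115      # commonly quoted Cray-1 draw (assumption)
minutes = 10
price_per_kwh = 0.15     # assumed retail electricity price, USD

energy_kwh = cray_power_kw * minutes / 60
cost_usd = energy_kwh * price_per_kwh
print(f"{energy_kwh:.1f} kWh of electricity costs about ${cost_usd:.2f}")  # ~19 kWh, ~$3
# ...which is in the ballpark of the price of a bare ESP32 module.
```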

@etchedpixels
> We've got an enormous list of computation problems that we don't have the resources to compute properly.

The earlier part of that sentence in @cstross's message provides the modifier addressing that point: “the rentiers who invested […] still want the rate of profit they've become accustomed to”.

To those ghouls, the only useful things they'll consider are ones where *they get enormous monopoly profit* from the result. Any result they can't privately monopolise is, ipso facto, not useful — for them.

@bignose @cstross The VLSI hardware side is mostly commodity and cyclic. There are a few people making a lot of money on specific leading-edge products, but they always tend to get eaten away unless they invest massively in figuring out the new.

And if your rentiers are the licensing people then most of them don't go for big margins either because there's a lot of competition. Those that do (like ARM) are getting to learn what competition means.

@bignose @cstross People will always spot a glut of something and try to find a use for it. VR and AR are mostly smaller businesses trying to find what (if anything) works with the new capabilities and discovering that rule 1 applies (porn is the driver of tech). We've got huge amounts of useful AI (not LLMs) from it, and lots of other stuff.
Yes, most new tech is junk, but as the saying goes, 90% of everything is shit. Heck, people made tablets for 20 years before Apple showed how to do it right.

@etchedpixels @cstross @graydon Is it the _resources_ we don't have, though? Or something else? We're seeing the limits of IT as we currently understand it — I think.

A breakthrough in, say, human-machine interaction, in machines being able to perceive the world and act in it as we do; that's not going to be solved by more computing _power_ but by a breakthrough in techniques. If at all.

@fishidwardrobe @cstross @graydon We are hitting lots of limits - on silicon sizes, on power, on validation (hardware and software), on correctness, on data sets and many more.
A human brain weighs about 1.5 kg, outperforms an LLM and doesn't require a large power plant, so there are clearly better ways of doing some kinds of computation (although humans of course suck at many kinds of computation too, so it's an iffy generalisation).

@etchedpixels @cstross @graydon I'm not sure the human brain is doing "computation" as we know it, and there's a big problem to solve: "as we know it"

@fishidwardrobe @cstross @graydon If the law of requisite variety is indeed correct, we may well never truly be able to know how we think because it would require something with more state than us to distinguish all the state we have and understand the relationships.

@etchedpixels @fishidwardrobe @cstross We also have a historical tendency to put too much weight on calculation. (That would be "any weight at all.")

We evolved, so everything we can do is an extension of something some other organism can do, and it had to be not-actively-harmful the whole time, back into the deeps of time. We're not calculators. We are plausibly signal processors. (And for the most recent few tens of thousands of years out of those hundreds of millions, we can math.)

@etchedpixels @fishidwardrobe @cstross It's at least decently plausible that "intelligence" is "usually correct reflexes about what to ignore as irrelevant".

Attempting to simulate this is often useful (signal processing is a large and productive field), but it's also really difficult to describe once processing layers start interacting with each other, and anybody with any knowledge of biology sighs and expects that in organisms it's nothing like that neatly separated into layers.

@cstross And the current super-computers are beyond anyone's wildest dreams.

@etchedpixels @fishidwardrobe @cstross @graydon Well... I'd not say that a human outperforms an LLM. A human clearly outperforms an LLM when the task is "emulate a human", but that's hardly a fair benchmark. If the task were "translate between two randomly chosen languages", an LLM would already outperform humans. Probably for quiz-style questions, too.
@etchedpixels @cstross @fishidwardrobe @graydon Actually, that might be a fun task. A reverse Turing test -- human and LLM, where both try to convince the judge that they are the LLM :-).

@kurtseifried @cstross @graydon and yet everything old has become new again—to do modern-day supercomputing you need to know how to rephrase your algorithms in Cray-1 SIMD style so they can be run on a GPU.
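
(A minimal sketch of what that "Cray-1 SIMD style" rephrasing looks like in practice, using NumPy as a stand-in for a GPU array library; the SAXPY example and function names are illustrative, not from the thread.)

```python
import numpy as np

def saxpy_loop(a, x, y):
    """Elementwise loop: the style that vector machines and GPUs punish."""
    out = np.empty_like(y)
    for i in range(len(y)):
        out[i] = a * x[i] + y[i]
    return out

def saxpy_vector(a, x, y):
    """Whole-array expression: Cray-style vectorisation, and also the form
    GPU array libraries (CuPy, JAX, PyTorch) want the computation in."""
    return a * x + y

x = np.random.rand(1_000_000).astype(np.float32)
y = np.random.rand(1_000_000).astype(np.float32)
assert np.allclose(saxpy_loop(2.0, x, y), saxpy_vector(2.0, x, y))
```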

@kurtseifried @cstross @graydon I used to tell my students that if cars had dropped in price since 1950 as much as the cost of computation, you wouldn't pay parking tickets, you'd say "keep my car!" Mind you, I was saying this 50 years ago!

@pavel @graydon @etchedpixels @fishidwardrobe @cstross last week ChatGPT 4 placed last in the Paraparaumu Presbyterian Church weekly pub quiz after being the only "contestant" to get every single question wrong.

@cstross @graydon My current pet conspiracy theory is that the broligarchs have drunk their own kool-aid about AI and are pushing for anti-immigration policies in the USA to create a market for more robotics.

@whvholst @cstross @graydon

Someone ought to look at the U.S. population pyramid. We ought to take advantage of the desire people everywhere have to come here.

They don't want to go to Russia or China, two nations that will be crushed by an aging, shrinking population in the next thirty years.

@whvholst @cstross @graydon Well I'd love to see the look on their faces when the murder bots take over while would-be immigrants look on and laugh.

@catch56
Isn't this the guy who said people should do more citizen's arrests of shoplifters so we don't have to employ more police?
@whvholst @cstross @graydon

@Stinson_108 @whvholst @cstross "Those people look funny."

Fascism is a conscious choice to prefer as much violence as necessary—an embrace of nihilistic destruction—in preference to meaningful change. (meaningful = someone loses relative social status) You can't immigrate your way out because immigration upsets incumbency.

@Stinson_108 @whvholst @cstross The broligarchs are faced with the end of the gold rush, not being able to produce those rates of return, and the deep incumbency they don't understand and can't touch (= where the food comes from). Everybody joining in from "are we poor now?" over the price of eggs ought to be willing to tack broligarch hide to the Speaker's chair to get the Gini coefficient down, but fearing their neighbours is safer and has much more social support.

@Stinson_108 @whvholst @cstross Nobody is doing quantified analysis because that tells you this is a larger crisis than has previously occurred and we need unity and a complete lack of anything that's ever heard of a profit motive to get through it, should we in fact get through it.

It's pretty easy to find something, anything, more emotionally appealing than that; it's why the main grifts are "when God kills you first it won't hurt" and "you can murder your fears".

@pavel @graydon @etchedpixels @fishidwardrobe @cstross Yes, but the LLM has training data on the subject and the average human doesn't. Find a human who speaks both languages fluently and I'm confident they would do a higher-quality translation.

@pavel @graydon @etchedpixels @cstross "translate two randomly chosen languages" is also an "emulate human" task. a _random_ human will be bad at that, it's true, but it will use _much_ less resources; and it will be liable if wrong…

@kurtseifried @cstross @graydon and yet no ESP32 will ever be as cool as a Cray-1

@Insanitree @graydon @etchedpixels @fishidwardrobe @cstross Of course. "LLM has training data on the subject". That was my point.