Discussion about this post

jf48jfd9hf9re

Yeah, I've never been convinced by this stuff, and I secretly find it cringey and embarrassing when EA/Rationalist types obsess over it to the exclusion of all other concerns. But I don't express my misgivings, because they're supposed to be smarter, more knowledgeable, and more rational than me, so I can't possibly be right, right?

Can machines be made to think? Sure. Can machines be made sapient? Sure. Can machines be made to out-think all humanity? Sure. Does that mean they will spontaneously do so and become a threat to us? I don't see how.

Being smart doesn't magically give you the ability to upgrade your own hardware. Hawking or Kasparov can't upgrade their own brains to superintelligence just by thinking about it. AI capabilities can certainly accelerate rapidly, but it requires cycles of human involvement and substrate upgrades. I don't see how it would just happen, without warning or without humans understanding how it's happening.

Self-preservation is an instinct formed by evolution, not an inherent quality of living things; many animals lack it, for various reasons. AIs would develop it if we evolved them, or if we trained them to have it, but it doesn't seem likely that they would spontaneously develop it in a way that poses a threat to us.

And dumb humans with subintelligent AIs are a huge threat. We're already using them to develop assassination drones and biological weapons, and being able to run armies of millions of IQ-70 workers in parallel could give large entities like states or corporations a huge competitive advantage over others, and we know from history how that works out. Subintelligent AIs have the potential to wipe us out long before superintelligent AIs can be developed, but I never see anyone mentioning these threats, just Skynet scenarios.

Maximilian Tagher

> the current AI capacity that I know of seems to be human-level good at lots of specific things like making art and poetry and essays and math and recognizing objects

Human level seems like a stretch to me here. Like, maybe an AI can convincingly fake being a human in an essay by more or less plagiarizing a bunch of stuff, but is any AI making original cogent arguments in essays? Are people reading these essays by choice?

I feel similarly about poetry. If it’s human level, are people reading it, without it being filtered down heavily by human editors?

Object recognition too: yes, within predefined categories AIs recognize dog breeds better than humans or whatever, but at a more general-purpose task like "hey, can you just look around the room and tell me the name of every object you see," it seems they're not close?

I don’t follow AI stuff so I’m open to pushback here.

12 more comments...
