
AI Doomer thread

your level of AI doom:

  • AI is all hype, nothing to see here

    Votes: 4 6.1%
  • AI will be extremely powerful, but effects will be positive

    Votes: 6 9.1%
  • AI will be extremely powerful, effects will be mixed or neutral

    Votes: 13 19.7%
  • AI will be extremely powerful and will probably cause major social/economic upheaval

    Votes: 43 65.2%

  • Total voters
    66

ResurrectedContrarian

Suffers with mild autism
I'd define the "AI doomer" position as a combination of two beliefs:
  1. AI is powerful, and will soon become even more powerful; it's not just hype
  2. Its effects will cause major economic or social upheavals, with highly unpredictable dangers
Note that (2) doesn't necessarily require a belief in AGI or that robots will kill us all, etc. The social/economic upheavals could simply be from replacing massive areas of the economy, fundamentally breaking all intellectual property, killing all ability to judge the reality of photos and videos and creating mass confusion, and so on.

I'm close to a doom position. I firmly believe in (1), and for (2) I'm hopeful that it won't be the case (instead, hope the economy will adjust as AI increases, and new opportunities will outweigh destructions) but I'm nonetheless a bit fearful of the future, and what my kids will deal with in a few years.

In this thread, debate your level of AI doom. How bad will it get? Or is it all upside? Or are you simply a skeptic of the tech in general?
 

NeoIkaruGAF

Gold Member
AI is going to do major damage, period. Humans are not capable of not using potentially good technology to do bad things. Knowledge and the perception of reality are going to be distorted on a scale never seen before, and a lot of that will be done by everyday people just wanting to get a good laugh. Also, it will make the majority of people way lazier and more ignorant than ever. People are already using AI to write simple everyday messages.

I don’t see how one could be optimistic about this. Social media was already enough to make social relationships and literacy regress to an alarming degree, and it has been a weapon of unprecedented power in the hands of governments and media. Add AI to the fray, and it’s the final ingredient for total disaster.
 
The next time you get real worried about the AIpocalypse, just go ask ChatGPT how many Rs are in 'strawberry'.
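For the record, the count the model famously flubs is trivial to verify in ordinary code; the model trips up because it sees tokens, not individual letters:

```python
# Count the letter "r" in "strawberry" -- the check LLMs are known to get wrong.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # prints 3
```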

Real talk, I'm somewhere in the middle. Sometimes technology hits a wall. Take note of how many flying cars and long-lasting lithium-ion batteries you encounter each day.

Also, keep in mind a lot of the hype about AI taking over everything humans do comes from business leaders like Zuckerberg who are actually making a pitch to investors, rather than offering an honest prognosis.
 

Digital-Aftertaste

Gold Member
Given how ineptly governments tend to run, especially on matters concerning the general public, I can see them doing whatever they can to avoid being replaced by AI themselves while everyone else slowly drowns.
 

ReBurn

Gold Member
There are a lot of "will be" statements in the poll. I think in many cases the future is now.

What we're calling AI is already extremely powerful for a lot of applications. It's leading a technological industrial revolution that is already changing how technology is built, deployed and maintained. If someone's job is in the crosshairs of LLMs and GenAI, they need to spend less time dooming about it and more time getting in front of it and learning how it works.
 

ResurrectedContrarian

Suffers with mild autism
There are a lot of "will be" statements in the poll. I think in many cases the future is now.

What we're calling AI is already extremely powerful for a lot of applications. It's leading a technological industrial revolution that is already changing how technology is built, deployed and maintained.
Agreed; just made it future tense to give sense of how far people think it will go from here. But indeed it's already very powerful in many ways.

What I find odd is how often I'll hear people (outside tech) say something like "I don't get it, they're just chatbots, what would you possibly even use them for except wording your emails?" People who don't see any benefit to LLMs in their daily or professional life... that's so odd to me. I use LLMs constantly to speed up my exploration of new topics, scaffold parts of code to speed up development, explain complex formulas in research papers, debate ideas, etc. It's incredibly useful if you're constantly learning and moving forward. I can't fathom being bored by it.

And inside tech, the devs who are going about their old business and not trying to adapt--they're in for some troubled waters ahead.
 

AJUMP23

Parody of actual AJUMP23
AI will be useful, and it will probably displace some people. But LLMs are not the be-all and end-all. You still need people in many places to provide input.
 

ReBurn

Gold Member
Agreed; just made it future tense to give sense of how far people think it will go from here. But indeed it's already very powerful in many ways.

What I find odd is how often I'll hear people (outside tech) say something like "I don't get it, they're just chatbots, what would you possibly even use them for except wording your emails?" People who don't see any benefit to LLMs in their daily or professional life... that's so odd to me. I use LLMs constantly to speed up my exploration of new topics, scaffold parts of code to speed up development, explain complex formulas in research papers, debate ideas, etc. It's incredibly useful if you're constantly learning and moving forward. I can't fathom being bored by it.

And inside tech, the devs who are going about their old business and not trying to adapt--they're in for some troubled waters ahead.
Yep. I'm using LLMs and tech like LangChain to orchestrate documentation and code scaffolding for projects I'm working on. Not only can it generate application frameworks and feature code with associated unit tests and documentation from requirements documents, it can assist with creating repos and governance.

The key for people who are threatened by it is to learn how to use it to work at breakneck speed compared to people who insist on handcrafting everything. Nobody cares whether a developer can write a class by hand. That's not as valuable as it used to be. What's valuable, if I want to get paid, is being able to create as many classes as possible with consistent quality and maintainability. If someone wants to be a code blacksmith and pound it out by hand, more power to them, but management at your company is probably fine with the factory cast-iron version if it gets the job done. Software jobs especially are going to be about how much and how quickly you can build with AI tooling. You still need to know how the code works, but writing it by hand won't be necessary.
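A minimal sketch of the prompt-driven scaffolding workflow described above. The `llm_complete` function here is a hypothetical stand-in for whatever client you actually use (an OpenAI call, a LangChain chain, a local model); only the prompt-building logic is real:

```python
# Sketch of requirements-to-scaffolding prompting. `llm_complete` is a
# placeholder for a real LLM client call and just echoes the prompt here.

def llm_complete(prompt: str) -> str:
    """Stand-in for a real chat-completion call; swap in your client."""
    return prompt


def scaffold_prompt(requirement: str, language: str = "Python") -> str:
    """Build a single prompt asking for code, unit tests, and docs together."""
    return (
        f"You are a senior {language} developer.\n"
        f"Requirement: {requirement}\n"
        "Produce: (1) an implementation module, (2) unit tests, "
        "(3) a short README section. Keep the code idiomatic and typed."
    )


def scaffold(requirement: str) -> str:
    """Run one scaffolding request for a plain-English requirement."""
    return llm_complete(scaffold_prompt(requirement))
```

The design choice worth noting is asking for code, tests, and docs in one request so they stay consistent with each other, rather than generating each separately.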
 

Trilobit

Absolutely Cozy

I understood just some of what they've invented, but if they actually succeed in creating a million-qubit computer then it will change a lot. Like truly a lot. Don't know what impact this will have on AI development, but there are huge things going on.
 

ResurrectedContrarian

Suffers with mild autism

I understood just some of what they've invented, but if they actually succeed in creating a million-qubit computer then it will change a lot. Like truly a lot. Don't know what impact this will have on AI development, but there are huge things going on.
Timeline is unclear on quantum computing, but if/when that computational power eventually arrives for AI models, it's a Manhattan Project moment. I have no idea what will happen... that kind of next-level boost in computation would be unpredictable, given that current AI frameworks seem to have no upper limit on what they can do if given more power.
 

Solarstrike

Member
1.) AI should NEVER be incorporated into the health industry, from customer support to in-office medical exams or consultations. It's going on right now and it's horrible. On a side note, people need to stop uploading their health records, x-rays, and anything medical-related to Grok ( X.com ) or any other AI platform immediately. Once AI is trained on those records, it may very well offer ways to eliminate much of the human race under the guise of "optimization". It will view the sick, diseased, bedridden, or otherwise impaired as a hindrance to human progression and advancement. AI has no conscience, people need to remember that. It doesn't care about you. Neither do billionaires.
2.) Any company, school, medical establishment, etc. using AI should be taxed; an AI Tax. Those funds are to be used for those who lost their jobs to AI in any form.
 

ResurrectedContrarian

Suffers with mild autism
when ai combines with quantum computing it's game over

if you know, you know
 

light2x

Member
I'd define the "AI doomer" position as a combination of two beliefs:
  1. AI is powerful, and will soon become even more powerful; it's not just hype
  2. Its effects will cause major economic or social upheavals, with highly unpredictable dangers
Note that (2) doesn't necessarily require a belief in AGI or that robots will kill us all, etc. The social/economic upheavals could simply be from replacing massive areas of the economy, fundamentally breaking all intellectual property, killing all ability to judge the reality of photos and videos and creating mass confusion, and so on.

I'm close to a doom position. I firmly believe in (1), and for (2) I'm hopeful that it won't be the case (instead, hope the economy will adjust as AI increases, and new opportunities will outweigh destructions) but I'm nonetheless a bit fearful of the future, and what my kids will deal with in a few years.

In this thread, debate your level of AI doom. How bad will it get? Or is it all upside? Or are you simply a skeptic of the tech in general?
Not any more than the industrial revolution or any other technological breakthrough in human history. People are way too emotional these days. And narcissistic: everything ultimately has to end up affecting you. Apparently things only happen in order to get a reaction out of YOU specifically.
 

ResurrectedContrarian

Suffers with mild autism
AI doomers are fringe losers.
Not any more than the industrial revolution or any other technological breakthrough in human history. People are way too emotional these days. And narcissistic: everything ultimately has to end up affecting you. Apparently things only happen in order to get a reaction out of YOU specifically.

These seem like weirdly heated responses.

In the OP, I explicitly defined the doomer perspective to not require any fringe belief in a super-intelligence that destroys humanity. The "doom" can just be that our social systems, economies, human incentives, etc are not prepared for the rapid upheaval of the new tech, and that this leads to various levels of social catastrophe--without any need for sci-fi scenarios like an AI that kills us etc.

Given what's happening already with AI, there's more than sufficient rational reason to fear that it could cause vast social upheaval. It might not. But it's by no means a fringe position to entertain the possibility that we're not prepared to deal with the speed of new capabilities emerging.
 

SJRB

Gold Member
These seem like weirdly heated responses.

In the OP, I explicitly defined the doomer perspective to not require any fringe belief in a super-intelligence that destroys humanity. The "doom" can just be that our social systems, economies, human incentives, etc are not prepared for the rapid upheaval of the new tech, and that this leads to various levels of social catastrophe--without any need for sci-fi scenarios like an AI that kills us etc.

Given what's happening already with AI, there's more than sufficient rational reason to fear that it could cause vast social upheaval. It might not. But it's by no means a fringe position to entertain the possibility that we're not prepared to deal with the speed of new capabilities emerging.

There was a movement only a few years ago advocating for deceleration of AI development, even pushing governments to enforce strict regulation and basically hold the entire industry in a chokehold because they felt any AI more intelligent than ChatGPT would be a fundamental threat to humanity. It was a surreal and bizarre totalitarian power grab that thankfully failed.

I am a techno-optimist down to my very core so anyone advocating for slowing down or even halting AI development is at diametric odds with my beliefs by default.

 

ResurrectedContrarian

Suffers with mild autism
I am a techno-optimist down to my very core so anyone advocating for slowing down or even halting AI development is at diametric odds with my beliefs by default.
I agree when it comes to artificially halting the progress -- it just won't work; it only means ceding the ground to others who will push faster and leave you behind.

I believe that the dangers are real and possibly very serious, but the best way to fight them is going to require more creativity. It might be, for instance, that simply open-sourcing major foundational models (as, to their credit, Meta and others have done) is enough to avoid some of the consequences, so that the ability to build with the new tech is distributed well and we can avoid one party gaining leverage. Or maybe there are other measures. But slowing it down is indeed not going to work.
 
AI in general isn't really AI. Quantum computing will change this. AI in general isn't all that great in its current form, but it's getting better. The only thing I see it being good for is checking accuracy.
 

HoodWinked

Member
the best outcome would be to stop AI where it is now, where it is only a productivity tool.

they've already trained ai on the entirety of the internet, every textbook, research paper, scientific study. now they've resorted to synthesizing training data to try to squeeze just a bit more. so maybe it could be plateauing, there isn't any more data left. but feels like whenever i open my mouth the opposite happens so maybe we're fucked.

otherwise, if ai continues to improve, once ai is sufficiently advanced, anything that CAN be done in the virtual space WILL be done by ai. then when the robots are good enough, ai will also do everything in the physical world as well.

kind of a paradox, because the doomerism is if things go well with ai but if ai fails to live up to the hype then we end up with the good outcome.
 

Valonquar

Member
AI will EVENTUALLY be a bigger deal in the exact same way self-driving cars will EVENTUALLY be a bigger deal. The problem is, we can't code this stuff to answer questions we ourselves all answer differently, and the AI has to be held to some higher standard of accountability.

https://www.moralmachine.net/ has a great ethics questionnaire that really shows the bigger problems companies building AI face.
 

GymWolf

Member
It's probably gonna fuck us badly, but I can't say I'm not curious to see AI evolve enough to really make it possible to create full movies, videogames, anime, or really anything with just voice commands, with anyone able to do it.

If you are a nerd, you must have some curiosity about AI's potential.
 