@reeshuffled on Github

The Moral Objectionability of LLMs

Stub • 413 Words • Artificial Intelligence, Philosophy • 03/07/2025

LLM = Large Language Model (a subset of Generative AI)

  • The following arguments may apply to Generative AI as a whole, but I mostly had ChatGPT in mind when writing this

I think that it’s okay to be snobbish toward AI usage, but you have to be on firm argumentative ground lest someone critique you for hypocrisy.

natural resource usage

  • all electronic actions consume electricity, but LLMs use a lot of power
    • if we accept that power usage is inherent to electronic actions, then the charge shifts to the fact that many alternatives to LLMs exist for your work (i.e., the usage is unnecessary/frivolous)
    • what if they ran on a high percentage of renewable energy? would that make it better?
  • a lot of people talk about the hardware that powers LLMs, but it doesn’t just instantly evaporate water
    • water is used to cool the data centers that run LLMs, but much of it is recycled, and other materials could be used in place of water for thermal exchange

theft of intellectual property

  • mass scraping of the internet with little to no credit given with lots of profit made built off of stolen work
    • a common counter-argument is that people also learn from copyrighted content without paying, but that isn’t necessarily true: we might buy a book or movie and learn from that
    • even when we don’t pay, the infringement happens at a far smaller scale, and humans typically have an intention of differentiation, taking care not to too heavily copy or infringe on existing works
      • the scale at which LLM training scrapes content is beyond anything a human has done or ever could do

the deadening of minds

  • usage or reliance on LLMs may make you a worse critical thinker
  • through usage in academia, students may never develop certain skills that they otherwise need or would have learned
    • one might object that those students wouldn’t have learned anyway, but the ease of LLM use raises the bar for students working without it, so they effectively lose out for not taking the path of least resistance

slop overload

  • low-effort, poor-quality AI creations can be produced so much quicker and cheaper than quality content that they begin to flood social media and marketplaces, crowding out genuine work
    • this can overload academic peer review, Twitter replies, Pinterest, etc.
  • crowds out smaller gig workers and devalues creative work
  • enables the proliferation of misinformation
