Can We Regulate the Godzillas of AI?
26 Apr
Summary
- AI consumes vast amounts of text, energy, and water, resembling a 'kaiju'.
- Founders initially aimed for ethical AI to protect humanity.
- Ethics struggle to scale in profitable, rapidly growing tech companies.

Artificial intelligence, though wondrous, has monstrous qualities: it consumes vast amounts of text, energy, and water. Its rapid growth has created complex challenges for AI companies, many of which were founded on ethical intentions, chiefly preventing existential threats from superintelligence. OpenAI, for instance, founded with a mission to create aligned AI, is now navigating the transition from non-profit ideals to a profit-driven enterprise.
This shift has created significant societal and legal friction. The immense economic impact of AI, exemplified by ChatGPT's widespread adoption, is debated as either a path to utopia or a jobless dystopia. A wave of lawsuits, including those against federal agencies and major tech firms, underscores the growing disputes over AI's capabilities and regulation.
Historically, the tech industry's focus on disruption and scale has often outpaced its ethical considerations. Unlike that of highly regulated professions, the software industry's ethical framework is informal and therefore difficult to enforce. This dynamic raises critical questions about managing AI's future impact, including issues like deepfakes, liability, copyright, and environmental costs.
Moving forward, comprehensive regulation is seen as essential for both the public good and the industry's stability. Concepts like 'Google zero,' in which AI-generated answers replace website traffic, and 'model collapse,' in which AI models exhaust their supply of fresh training data, highlight potential future challenges. A robust legislative approach, potentially mirroring the scale of AI itself, is proposed to address these complex issues and guide AI's development responsibly.