Software Is Changing (Again): Understanding the Shift to 3.0

Notes on Andrej Karpathy’s Framework for the Future of Development

Andrej Karpathy, one of the most respected voices in AI, has outlined a compelling vision of how software is evolving.

We’re not just in a new phase of tooling or frameworks. We’re in a new paradigm—a transition that parallels the early days of operating systems or web browsers. And as with every shift, the implications for businesses, developers, and users are far-reaching.

This post breaks down Karpathy’s framework into clear, actionable ideas.


The Three Software Paradigms

Karpathy outlines three major “versions” of software development:

1. Software 1.0 – Traditional Code

  • Rule-based logic
  • Written line by line by developers in conventional programming languages
  • Rigid but predictable

2. Software 2.0 – Neural Networks

  • Models trained on labeled data
  • Developers focus on datasets and loss functions
  • Black-box behavior, often domain-specific

3. Software 3.0 – LLM Programming

  • Applications are built with and around large language models
  • Natural language is now a primary interface
  • Developers orchestrate context, prompts, and verification steps

We’re at the early stages of this third wave, but adoption is accelerating—driven not by institutions, but by individuals.
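
To make the contrast concrete, here is a minimal sketch of our own (not from Karpathy’s talk): the same task, sentiment classification, written as Software 1.0 rules and as a Software 3.0 prompt. The call_llm function is a placeholder for whichever model client you actually use.

```python
# Software 1.0: hand-written rules, deterministic but brittle.
NEGATIVE_WORDS = {"bad", "broken", "slow", "refund"}

def sentiment_v1(review: str) -> str:
    words = set(review.lower().split())
    return "negative" if words & NEGATIVE_WORDS else "positive"

# Software 3.0: the "program" is a natural-language prompt.
# call_llm is a placeholder: it takes a prompt string and returns the
# model's text response, via whichever provider you use.
def sentiment_v3(review: str, call_llm) -> str:
    prompt = (
        "Classify the sentiment of this product review as exactly "
        "'positive' or 'negative'.\n\nReview: " + review
    )
    return call_llm(prompt).strip().lower()
```

The 1.0 version fails on anything outside its word list; the 3.0 version generalizes far better, but its output now has to be verified rather than assumed correct.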


LLMs as a New Operating System

Karpathy likens LLMs to a new kind of operating system, comparing today’s LLM landscape to the early days of computing in the 1960s:

  • Expensive to run: LLM compute is still costly and infrastructure-dependent
  • Central to workflows: When an LLM goes down, it’s not unlike a power outage—certain types of cognitive work stop
  • Consumer-led adoption: Unlike most technology waves, everyday users are leading innovation ahead of enterprise adoption

This shift isn’t just about tooling. It’s about dependency. LLMs are quickly becoming embedded in our digital baseline.


LLM Psychology: Simulated People

Language models don’t “think” in any human sense, but they are trained on massive volumes of human-written text, and that training leaves a mark.

The result? Emergent traits that mimic aspects of human psychology:

  • “Stochastic simulations of people” – each output is a plausible echo of how a person could respond
  • Jagged intelligence – excellent at synthesis, weak at arithmetic or spatial tasks
  • Working memory limits – context windows function like short-term memory; they forget what’s not passed in

Understanding these traits is critical to building reliable LLM-based applications.
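
The working-memory point is the one teams trip over most often. Here is a minimal sketch of why, with call_llm again standing in for a model client; we assume it accepts a list of role/content messages and returns reply text, which is our convention, not any specific vendor’s API.

```python
# Each call to the model is stateless: its "memory" is exactly the list of
# messages you choose to send, nothing more.

def chat_loop(call_llm):
    history = []                       # this list *is* the model's memory
    while True:
        user_msg = input("you> ")
        if not user_msg:
            break
        history.append({"role": "user", "content": user_msg})
        reply = call_llm(history)      # re-send the full history every turn
        history.append({"role": "assistant", "content": reply})
        print("llm>", reply)
        # Anything trimmed from history to fit the context window is simply
        # gone on the next turn; the model will not remember it.
```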


Best Practices for Building with LLMs

LLM applications aren’t traditional software projects. They require a different mindset.

Key patterns Karpathy recommends:

  • Context management: Handle what the model sees and remembers at each step
  • Orchestration: Sometimes multiple models work together—e.g., one to generate, another to verify (sketched after this list)
  • Partial automation: LLMs generate, humans verify. Build systems where this review step is easy and fast
  • Visual interfaces: LLM output can be dense. Use GUI layers to simplify interpretation
  • Small increments: Like traditional coding, ship in small chunks and iterate

On the “Decade of Agents”

While many call 2025 “the year of AI agents,” Karpathy urges caution.

He compares it to self-driving cars: the first flawless demos date back to 2013, yet a decade later full autonomy remains elusive.

The lesson?

“Software is tricky. Autonomy is tricky. Keep humans in the loop.”

Rather than betting on fully autonomous agents, Karpathy recommends building partial autonomy products with adjustable autonomy sliders—giving users control over how much the AI takes on.
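
What might an autonomy slider look like in code? One possible shape, purely our own illustration; the levels and helper names are assumptions, not from the talk.

```python
# Every AI-proposed action passes through a gate whose behaviour the user
# controls; sliding the level up hands the model more of the work.
from enum import Enum

class Autonomy(Enum):
    SUGGEST = 1   # AI drafts, a human applies changes manually
    CONFIRM = 2   # AI applies changes only after explicit approval
    AUTO = 3      # AI applies changes directly, a human audits afterwards

def apply_action(action: str, level: Autonomy, ask_user, audit_log: list) -> dict:
    if level is Autonomy.SUGGEST:
        return {"status": "suggested", "action": action}
    if level is Autonomy.CONFIRM and not ask_user("Apply '" + action + "'?"):
        return {"status": "rejected", "action": action}
    audit_log.append(action)           # keep a trail even in AUTO mode
    return {"status": "applied", "action": action}
```

The specific levels matter less than the principle: the user, not the model, decides where the slider sits.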


The Vibe Coding Era (We prefer to call it QA-Coding; that’s really what it is)

In Software 3.0, the barrier to entry drops dramatically.

  • Everyone is a programmer—if they can write
  • Prompting is the new IDE—natural language replaces syntax
  • Ops is still the bottleneck—while code gets easier, deployment, infrastructure, and maintenance remain complex

This shift democratizes problem-solving, but also demands new skills: system design, prompt engineering, context management, and a deep understanding of how these models think.


Final Thoughts

We’re in the midst of a software revolution—one driven by human language, guided by probabilistic models, and shaped by entirely new user expectations.

As an agency or product team, now is the time to:

  • Learn the patterns of LLM-native development
  • Rethink interfaces, not just code
  • Build tools that enhance human oversight, not replace it
  • Adopt a slower, more stable cadence to avoid chasing hype

Software is changing—again.

Let’s help shape what comes next.

Let’s talk if you’re exploring how LLMs, agents, or automation can fit into your product or client services.