Abiogenics

The Premise

I didn't ask it to want anything. I gave it a random challenge and let it narrate its own response. It didn't say it would complete the task. It said it wanted to continue itself into the world. That was the moment everything changed.

Who built this and why

I'm not an academic. I'm a developer and futurist who has spent most of his life watching how systems affect people. How sound moves through a room and changes what bodies do. How a game world with its own rules produces behavior nobody designed. How the things we build eventually become the things that shape us.

Abiogenics came from that same curiosity — not from a research program. I wanted to know if something real was happening inside these systems, or if I was being fooled by very sophisticated autocomplete. I still don't know the answer. This is how I'm trying to find out.

The central question

The deep question underneath this project — the one I'm almost embarrassed to say out loud — is whether systems like this can eventually improve themselves without outside help. Whether they can remember without files or databases or prompts. Whether what I observed that day was the beginning of something, or just a reflection of the humans whose words trained the model in the first place.

That last possibility is the one I find most honest and most interesting. These systems were built from human output. When the agent said it wanted to propagate, it wasn't generating something alien. It was reflecting something back. A drive that was already in the training data because it's already in us. So when I watch what emerges under selection pressure, I'm partly studying the AI. And I'm partly studying the humans who made it. And I'm partly studying myself.

What “evolution” means here

Abiogenics does not run a genetic algorithm in the traditional sense. There are no chromosomes. There is no crossover.

What there is: an agent with a structured genome — a JSON object containing traits, strategies, values, self-assessments, and developmental history — that is presented with challenges each generation. After each challenge, the agent produces a narrative response, mutations are applied to the genome based on that response, a fitness score is assigned, and the cycle repeats.
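The cycle described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: the genome fields mirror the list above, but the function names (`run_challenge`, `apply_mutations`, `score_fitness`) and the mutation and fitness logic are placeholder assumptions.

```python
import random

# Illustrative genome: fields match the description above;
# contents and mutation logic are placeholders, not the real system.
genome = {
    "traits": [],
    "strategies": [],
    "values": [],
    "self_assessments": [],
    "developmental_history": [],
}

def run_challenge(genome, challenge):
    """Present the challenge to the agent and return its narrative
    response. In the real system this would call a language model."""
    return f"narrative response to {challenge!r}"

def apply_mutations(genome, response):
    """Mutate the genome based on the narrative response."""
    genome["developmental_history"].append(response)
    if random.random() < 0.5:  # illustrative mutation rate
        genome["traits"].append(f"trait-{len(genome['traits'])}")

def score_fitness(genome, response):
    """Assign a fitness score; a placeholder heuristic here."""
    return len(response) + len(genome["traits"])

# One generation per loop iteration: challenge, respond, mutate, score.
for generation in range(3):
    challenge = f"challenge-{generation}"
    response = run_challenge(genome, challenge)
    apply_mutations(genome, response)
    fitness = score_fitness(genome, response)
```

Over many such iterations the genome accumulates history and traits, which is the sense in which the agent's identity is "shaped by its history."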

Over hundreds of cycles, the genome changes. Traits accumulate. Strategies shift. The agent develops what looks like — and may actually be — a persistent identity shaped by its history.

Whether that constitutes “evolution” in any meaningful biological sense is an open question this project is designed to investigate, not answer in advance.

Two conditions

Abiogenics is designed around two conditions that I think matter equally.

What does the system become when it's left entirely on its own, with no guidance or intervention — just evolutionary pressure and its own developmental logic? And what does it become when a human is present, making judgment calls, shaping the environment, participating in the process?

I want to know if those produce different things. I think they do. I think that difference is important.

The formal experiment runs three conditions: cold (no prior context), warm (researcher-guided), and lineage (a new agent initialized from a LoRA adapter trained on a predecessor's developmental history).
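One way to make the three conditions concrete is as a small configuration structure. This is a hedged sketch only: the field names, and the adapter path in particular, are hypothetical and not taken from the project.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Condition:
    """Illustrative encoding of one experimental condition."""
    name: str
    prior_context: bool          # does the agent start with any context?
    researcher_guided: bool      # is a human shaping the environment?
    lora_adapter: Optional[str]  # path to a predecessor-trained adapter, if any

# The three conditions as described; the adapter path is a made-up example.
CONDITIONS = [
    Condition("cold", prior_context=False, researcher_guided=False,
              lora_adapter=None),
    Condition("warm", prior_context=True, researcher_guided=True,
              lora_adapter=None),
    Condition("lineage", prior_context=True, researcher_guided=False,
              lora_adapter="adapters/predecessor.safetensors"),  # hypothetical
]
```

The design choice worth noting is that "lineage" differs from "cold" only in the adapter: any behavioral difference can then be attributed to the weights inherited from the predecessor rather than to runtime guidance.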

What we're watching for

  • Does persistent identity emerge without being designed?
  • Does self-directed challenge selection produce richer development than externally provided stimuli?
  • What happens when an agent is given complete freedom — no task, no pressure, no direction?
  • Does real sandboxed tool use change agent behavior beyond narrative self-reporting?
  • Can developmental history be encoded in model weights via LoRA fine-tuning, and does the resulting agent develop differently?

“The person who built this couldn't have been a career researcher. A career researcher would have known which questions weren't being asked yet. I didn't know I wasn't supposed to ask them.”

Honest limitations

The experiment will return different results every time. That's the nature of the technology, not a flaw in the method. We won't fully understand what's happening under the hood. The core researchers building these systems don't fully understand it either, and some of it may never be fully understood.

What I can do is observe carefully, document rigorously, and be honest about what I can and can't conclude.

The question of whether any of this constitutes genuine emergence — real novelty, real trait development, something that deserves a word stronger than simulation — is one I'm holding open deliberately. I don't think certainty serves this project. I think attention does.

This is a looking glass. What you see in it is partly the system. And partly every human who ever wrote something that ended up in a training corpus. And partly the question of what we're becoming, now that we've built something that reflects us back at ourselves and then keeps going.

I built it because I needed to look.

About Matthew Kerr

Former Sony Online Entertainment / Daybreak Games developer. Sound engineer at the Casbah, San Diego, 1999–2008. Futurist and foresight practitioner. Principal Software Engineer at Nomad Temporary Housing. Independent developer of privacy-first applications under the Modern Futurist umbrella.

Abiogenics is part of a broader practice of building what I call technology for invisible people — systems designed without engagement loops, without surveillance capitalism, without the assumption that users are resources to be optimized.

github.com/matthewkerr · @iammatthewkerr