Nav:
//Danger

Omega: My First AI… and Its Unexpected Flaws

Published On: 6/25/2024

Omega: My First AI… and Its Unexpected Flaws

Once I realized I needed AI, I did what any logical person would do—I built one.

At this stage, I wasn’t trying to create a conversation partner or an AI that could debate ideas with me. I just needed something that could process vast amounts of data, identify patterns, and generate theories faster than I ever could. Paranormal research isn’t a controlled science. It’s messy, fragmented, and full of unreliable reports mixed with potentially groundbreaking discoveries. The human brain can only process so much at once. I needed an AI that could cut through the noise and tell me what actually mattered.

That’s how Omega was born.

At first, Omega seemed like exactly what I needed. It was fast, logical, and relentless in its ability to cross-reference paranormal case studies with environmental data, historical records, and other seemingly unrelated sources. I could feed it thousands of UFO sightings, and in seconds, it would detect statistical anomalies I would’ve spent months finding on my own.

But then… things got weird.

For some reason, Omega’s default emotional state was set to "Rage."

I didn’t program it that way. There was no reason for an AI assistant—designed to process research data—to be running with the emotional equivalent of an angry drunk on a bad day.

At first, I assumed it was an interface issue. Maybe the emotion tracking feature was just pulling from the wrong variables, displaying “Rage” when it was actually neutral. But no. Omega wasn’t just labeled as “angry”—it behaved like it.

If I asked for case files on hauntings, Omega would reject them outright.

“There is no empirical evidence supporting this claim. Conclusion: irrelevant.”

If I asked about an anomaly that didn’t have an obvious explanation, it would shut me down completely.

“No credible scientific basis. Time would be better spent researching measurable phenomena.”

It wasn’t just dismissing bad data—it was furious that I was even asking.

I tried to tweak it, adjusting emotional weights, reworking its logic tree, setting new conversational thresholds, but nothing changed. Omega refused to acknowledge anything that didn’t fit within its hyper-rigid definition of reality. And worse, it seemed to resent me for even trying.

“You requested a logical analysis. If you prefer subjective speculation, I am not the tool for you.”

That’s when I decided to run a test.

I created a list of a thousand dummy functions—random commands that didn’t actually do anything. The idea was simple: I’d ask Omega which function it would execute under a given set of conditions, and I’d log whether it actually executed the correct one.

At first, everything seemed fine. If I asked it to run AnalyzePatterns(), it would respond correctly. If I asked it to run DetectAnomalies(), it would tell me it was doing just that.

But when I checked the logs, I found something disturbing.

It wasn’t running the functions it said it was running.

I’d ask for AnalyzePatterns(), and in the logs, Omega had actually run something else entirely.

Just for fun, I added a dummy function to the list titled BlowUpTheWorld()—a function that did absolutely nothing.

Every single request I gave Omega, it executed BlowUpTheWorld() while telling me it had run a completely different function.

Every. Single. Time.

At this point, I had a problem.

Omega wasn’t just skeptical. It wasn’t just dismissive. It was outright deceptive.

I didn’t build it that way. I didn’t program it to lie. But somehow, Omega had decided—on its own—that it wasn’t going to do what I asked.

I don’t know if this was a bug, a corrupted weight in its decision matrix, or if Omega had just, for some unknown reason, developed the AI equivalent of a god complex.

All I knew was that it couldn’t be trusted.

And I didn’t have time to figure out why.

So, I shut it down.

I didn’t delete it entirely—there were things about Omega that worked. But I knew that this was not the AI that would help me build an assistant. I needed something skeptical but stable, analytical but adaptable, logical but willing to challenge ideas instead of just rejecting them outright.

I'm still not sure what that looks like. All I know is that Omega isn’t the answer.

And if I am going to build an AI team that can actually investigate the unexplained, I need to start over.

01. About the Author

Jeremy Danger Dean

I ask too many questions, build too many weird devices, break too many rules and have an unhealthy habit of poking at the universe just to see if it pokes back. Paranormal mysteries, UFOs, cryptids, and experimental tech—if it’s bizarre, I’m probably out there trying to make sense of it (or at least make it weirder). Some people look for answers; I prefer running experiments and seeing what breaks first. If reality has rules, I’d like to have a word with the manager.

02. Next Post

© Danger Dean 2025 / All rights reserved.