December 29, 2024. Who Decides What's Right?


It's 3 AM, and I'm staring at a line of code that's been haunting me for hours. The function works—it does exactly what it's supposed to do. But something about it feels wrong. Not technically wrong, but ethically wrong. It's optimized for efficiency, follows all the best practices, and would probably get approved in any code review. Yet, it makes an assumption about how users should behave, subtly forcing them down a predetermined path.

This might sound like the overthinking of a sleep-deprived programmer (guilty as charged). But it connects to something deeper, something that’s been gnawing at me since my last reflection on the death of individuality. Who gets to decide what's "right"? In code, in systems, in society—who makes these decisions, and what gives them that authority?

In engineering school, we learned about optimizations—how to make systems more efficient, more reliable, more "correct." But every optimization carries an implicit bias: someone's definition of what matters most. When we optimize for efficiency, we’re saying efficiency trumps all. When we optimize for scale, we’re placing growth above everything else. It’s rarely explicit, yet these choices ripple out into the world in ways we can’t always predict.

Last week, I was working on a user authentication system for Abdi & Brothers Company. Standard practice dictates locking accounts after a certain number of failed attempts. It's "right" from a security standpoint. But then I thought about my grandmother, who sometimes forgets her passwords. Would she end up locked out, frustrated, and excluded? Who decided security should take precedence over accessibility in this scenario? And why?
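To make that trade-off concrete, here's a rough sketch of the two policies side by side. This is a toy illustration in Python with made-up names and thresholds, not the actual Abdi & Brothers code: a hard lockout after a fixed number of failures versus a progressive delay that slows an attacker down without permanently shutting out someone who just forgets her password.

```python
MAX_ATTEMPTS = 5          # the "standard practice" threshold (assumed for illustration)
BASE_DELAY_SECONDS = 2    # starting delay for the progressive policy


def hard_lockout(failed_attempts: int) -> bool:
    """Standard policy: lock the account outright after too many failures.

    Strong against brute force, but it also locks out the forgetful grandmother.
    """
    return failed_attempts >= MAX_ATTEMPTS


def progressive_delay(failed_attempts: int) -> float:
    """Alternative policy: each failure doubles the wait before the next try.

    Brute force slows to a crawl, while a legitimate user who mistypes
    a few times waits seconds, not forever.
    """
    if failed_attempts == 0:
        return 0.0
    return BASE_DELAY_SECONDS * (2 ** (failed_attempts - 1))


# After five failures: one policy says "excluded", the other says "wait 32 seconds".
print(hard_lockout(5))        # True
print(progressive_delay(5))   # 32.0
```

Neither policy is "the right one." The point is that choosing between them is really choosing whose frustration counts.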

I’m reminded of a moment in my quantum mechanics class. We were studying the Copenhagen interpretation—the standard framework for understanding quantum mechanics. When I asked why alternative interpretations weren’t explored, my lecturer responded, “Science isn’t about finding absolute truth. It’s about finding useful models.” That hit me hard. Even in physics, what’s “right” is just what’s useful for a given purpose. Newtonian physics isn’t more “right” than quantum mechanics—it’s just more useful at certain scales.

But in human systems, what’s useful for some can be harmful to others. In physics, we test models against reality. If a bridge stands, if a circuit works, if a rocket flies—that’s empirical validation. But in social systems, in digital platforms that shape human behavior, how do we test if we’re “right”? By what standard? Who gets to decide?

Think about social media. Someone decided engagement was the right metric to optimize for. That decision, likely made in some Silicon Valley conference room, has reshaped global human interaction. Was it right? For whom? And by what measure?

These questions aren’t academic to me. As I build Abdi & Brothers Company, every line of code and every design decision embeds assumptions about what’s right, what’s better, what’s valuable. Yesterday, I was working on our recommendation algorithm. The standard approach is to optimize for user engagement—to show people more of what they already like. It’s "right" by current best practices. But after writing about individuality, I couldn’t help but question it. What if we optimized for discovery instead? What if we introduced randomness, serendipity, challenge? From an engineering perspective, it feels wrong—it’s inefficient, unpredictable, harder to scale. But maybe that’s exactly why it’s worth considering.
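Here's the kind of thing I mean, a toy sketch rather than our real algorithm (the names, scores, and parameters are all invented for illustration): most slots go to whatever the relevance scores say the user already likes, but some fraction is reserved for something they didn't ask for.

```python
import random


def recommend(candidates, relevance_scores, k=10, serendipity=0.2, seed=None):
    """Blend two definitions of "right": relevance and discovery.

    With probability `serendipity`, a slot is filled by a random unseen item
    instead of the next most relevant one.
    """
    rng = random.Random(seed)
    # "Best practice" ordering: most engaging first.
    ranked = sorted(candidates, key=lambda c: relevance_scores[c], reverse=True)
    picks, pool = [], set(candidates)
    while ranked and len(picks) < k:
        if rng.random() < serendipity:
            choice = rng.choice(list(pool))   # serendipity: something unexpected
        else:
            choice = ranked[0]                # engagement: more of what they like
        picks.append(choice)
        pool.discard(choice)
        ranked = [c for c in ranked if c != choice]
    return picks


scores = {"more_of_the_same": 0.9, "familiar": 0.8, "adjacent": 0.5, "out_of_left_field": 0.1}
print(recommend(list(scores), scores, k=3, serendipity=0.3, seed=42))
```

Even here the question doesn't go away: picking the value of `serendipity` is itself a judgment about what's right, which is sort of the point.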

In quantum mechanics, Heisenberg’s Uncertainty Principle teaches us that trying to measure one property precisely means losing precision in another. What if human systems work the same way? By optimizing too precisely for one definition of "right," do we inevitably lose something else that’s valuable? I think about the innovators who inspire me—people like Tesla and Einstein. They didn’t just solve existing problems better; they questioned the fundamental assumptions about what problems were worth solving. They dared to propose different definitions of “right.”

This is the challenge I’ve set for Abdi & Brothers Company. We’re not just building another platform. We’re questioning the assumptions about how digital systems should work. Should they optimize for efficiency or human flourishing? For scale or depth? For consistency or diversity? The engineering mindset in me craves clear answers, definitive metrics. But maybe the most ethical thing I can do is resist that urge and embrace the messiness of multiple truths.

I’m reminded of something from my control systems class: the more precisely you try to control a complex system, the more brittle it becomes. Maybe the same is true for human systems. Instead of deciding what’s right for everyone, maybe we should build systems that allow for multiple definitions of “right” to coexist and evolve. This isn’t about moral relativism or abandoning standards. It’s about recognizing that in complex human systems, "right" is often contextual, personal, and evolving.

So here I am, rewriting that function. Instead of guiding users down a predetermined path, it will offer choices, explain trade-offs, and respect agency. It’s messier, harder to maintain, less "efficient." But maybe that’s exactly why it’s right. Because in the end, who decides what’s right? Maybe that’s the wrong question. Maybe the real question is: How do we build systems that empower people to discover their own answers to that question?
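If I had to sketch the spirit of that rewrite (in toy form, with hypothetical names, not the function I was actually staring at), it would look less like a single silent default and more like a menu with the costs written on it:

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    trade_off: str   # stated plainly, so the user decides with open eyes


def onboarding_paths() -> list[Option]:
    """Instead of funneling everyone down one "optimal" path, surface the
    alternatives and what each one costs."""
    return [
        Option("guided", "fastest setup, but the defaults reflect our assumptions"),
        Option("manual", "slower, but every setting is your decision"),
        Option("minimal", "almost nothing configured; discover features as you go"),
    ]


def choose(options: list[Option], picked: str) -> Option:
    # The system explains; the user decides.
    for option in options:
        if option.name == picked:
            return option
    raise ValueError(f"unknown option: {picked!r}")
```

It's more code and more surface area to maintain than a hard-coded default, which is exactly the inefficiency I was talking about.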

As I build Abdi & Brothers Company, this principle guides me: We don’t have the final answer about what’s right. But we can create spaces where different answers can emerge, evolve, and coexist. Maybe that’s the most right thing we can do. 

What do you think? In your world, who decides what’s right? And more importantly, what gives them that authority?