My journey as a programmer began unexpectedly.
It wasn't through code or coding classes; instead, it was in response to a more profound question – one that has been asked of me repeatedly throughout my life.
From a young age, I often found myself at the center of conversations, with others seeking my perspective on matters ranging from politics and culture to life-altering decisions. Despite my age, I was frequently the voice of reason in rooms filled with seasoned adults, teachers, friends, and colleagues.
While I may not have had a clear understanding of why this was happening, one thing became apparent: I possessed an innate ability to make sense of complex ideas and navigate unfamiliar territory. This skillset would eventually become the foundation for my career in programming, as I transitioned from Excel spreadsheets to the world of Artificial Intelligence.
In reflecting on my journey, I've come to realize that this unique experience has not only shaped me into a programmer but also taught me valuable lessons about embracing authenticity and finding structure in an ever-evolving field. As I continue to grow and evolve alongside AI, I'm reminded that true fulfillment lies not in the technology itself, but in understanding its potential to illuminate our humanity.
Years later, I was a government major interning in the California Legislature. My job wasn’t glamorous. It wasn’t tech. It wasn’t strategy.
It was Excel.
Every day, letters came in — stacks of them. Emails. Position statements. Amendments. Testimony. My task was simple: log the mail. Track who sent it. Note support or oppose. Extract arguments. Parse legislative language. Distill pages of emotion and persuasion into structured columns.
At the time, it felt administrative. Mechanical. Almost boring.
But looking back, that spreadsheet was my first real experience with coding.
Because coding isn’t mysterious symbols on a black screen.
Coding is structured thinking.
It’s deciding what matters and what doesn’t.
It’s creating consistent fields so chaos becomes searchable.
It’s turning paragraphs into variables.
It’s separating signal from noise.
When I highlighted the pros and cons of a bill, I was performing classification.
When I standardized language across hundreds of letters, I was normalizing data.
When I reduced emotional testimony into bullet-point arguments, I was extracting features.
I didn’t call it machine learning. I didn’t call it data architecture. I just called it doing my job.
But in today’s AI-driven world, those same principles power intelligent systems.
AI does not thrive on chaos. It thrives on clean inputs. Clear categories. Repeatable logic. Consistent formatting. The same discipline I learned logging legislative mail is the discipline required to build reliable machine learning systems.
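To make the parallel concrete, here is a minimal sketch of that mail log as code. The field names and the position labels are illustrative assumptions, not the office's actual schema — the point is that classification and normalization are just explicit rules:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a legislative mail log entry.
# Column names are assumptions for illustration, not a real schema.

@dataclass
class LetterRecord:
    sender: str
    bill: str
    position: str                        # classification: one of a few labels
    key_arguments: list = field(default_factory=list)  # extracted features

def normalize_position(raw: str) -> str:
    """Map the many ways letters express a stance onto consistent labels."""
    raw = raw.strip().lower()
    if raw in {"support", "in favor", "aye", "yes"}:
        return "Support"
    if raw in {"oppose", "against", "no", "nay"}:
        return "Oppose"
    return "Unknown"

record = LetterRecord(
    sender="Example Advocacy Group",
    bill="AB 123",
    position=normalize_position("In Favor"),
    key_arguments=["cost savings", "local control"],
)
print(record.position)  # -> Support
```

The `normalize_position` step is exactly the "standardizing language across hundreds of letters" described above: many phrasings, few labels.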
The spreadsheet taught me that discipline.
And maybe that’s why people asked me questions all those years.
Not because I had all the answers — but because I instinctively tried to organize the problem before reacting to it.
I look for categories.
I look for definitions.
I look for the columns before I fill in the rows.
So when I build software now — whether I’m structuring CSV pipelines, designing AI prompts, organizing metadata, or building tools that learn from information — I return to the same principle I learned in that legislative office:
If a human can’t clearly categorize it, a machine definitely can’t.
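That principle can be enforced in code. A minimal sketch, assuming a hypothetical CSV pipeline with made-up column names: define the columns and allowed categories first, and refuse any row a human couldn't cleanly categorize either.

```python
import csv
import io

# Hypothetical sketch: validate categories before any pipeline runs.
# Column names and labels are illustrative assumptions.

ALLOWED_POSITIONS = {"Support", "Oppose", "Neutral"}
REQUIRED_COLUMNS = {"sender", "bill", "position"}

def validate_rows(csv_text: str) -> list:
    """Return rows only if they fit the agreed schema and categories."""
    reader = csv.DictReader(io.StringIO(csv_text))
    if not REQUIRED_COLUMNS <= set(reader.fieldnames or []):
        raise ValueError("missing required columns")
    clean = []
    for row in reader:
        if row["position"] not in ALLOWED_POSITIONS:
            raise ValueError(f"uncategorizable position: {row['position']!r}")
        clean.append(row)
    return clean

sample = "sender,bill,position\nJane Doe,AB 123,Support\n"
print(len(validate_rows(sample)))  # -> 1
```

Deciding the columns before filling in the rows, in executable form.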
My coding philosophy didn’t begin in a computer science lab.
It began with questions.
It was refined in spreadsheets.
And in a strange way, the more “boring” the data looks, the more powerful the system becomes.
Because underneath every intelligent machine is something very simple:
A clean table.
A clear structure.
And someone willing to ask the right questions first.
It was only later that I discovered my true passion lay not in the world of politics or social justice, but in the realm of technology: specifically, computer programming. As I delved deeper into coding, I began to realize that this journey was not just about mastering a skill set, but about becoming the person I was meant to be.
This project has two halves that work together. The first is a directory: a clean, structured map of the California Legislature—people, roles, committees, and decisions. The second is a 2026 staff audit, Inside the Golden State: A Data-Driven Analysis of Influence and Inclusion, which uses that structure plus public payroll data to surface patterns of pay, representation, and power. Together, they show what responsible AI in government can look like—and what it cannot.
Modern AI is not an all-knowing brain. It is closer to a statistical inference and deduction engine: given structured data, it finds patterns, estimates probabilities, and generates summaries. It cannot see the future, read minds, or fix injustice on its own. What it can do—when we give it clean inputs—is help humans see the system they are already living in with more clarity.
1. Directory: Structure of the Legislature
Download the CA Legislature Directory (Google Drive)
2. Report: Influence and Inclusion (2026 Audit)
Download “Inside the Golden State: A Data-Driven Analysis of Influence and Inclusion” (PDF)
The directory is organized the way a working legislature operates:
Senate member folders (e.g., Allen_Benjamin_SD24), committees, and floor sessions; Assembly member folders (e.g., Addis_Dawn_AD30), committees, and floor sessions. Each member folder is built for real work.
Under _Shared/Reference/ you’ll find staff salary data, legislative deadlines, rules, handbooks, district boundaries, and more—the kinds of reference materials that real staff rely on to track what is happening and when.
The 2026 audit takes that structure and asks a hard question: Who actually holds power inside this system? Using staff rosters, payroll data, and a retrieval-augmented AI pipeline, it surfaces a set of core findings about pay, representation, and power.
These findings echo the project introduction: this is a transparency audit, not a verdict on any individual. The report is a probabilistic model’s best guess based on public records, meant to move conversations from anecdotes and rumor toward evidence and measurable gaps.
The audit uses a Retrieval-Augmented Generation (RAG) pipeline instead of asking an AI model to “invent” answers.
A single JSON file (staff_data.json) holds the calculated counts, medians, and percentages for each branch, party, and hierarchy tier. That JSON becomes the source of truth.
This is what responsible AI in government looks like in practice: retrieval first, generation second. Facts are anchored in verifiable records. Narrative is layered on top, not substituted in.
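A minimal sketch of that retrieval-first pattern follows. Only the filename staff_data.json comes from the project description; the keys inside it are assumptions, and a template stands in for the generation step an LLM would perform:

```python
import json

# Sketch of "retrieval first, generation second."
# The structure of staff_data.json below is assumed for illustration;
# in the real pipeline an LLM writes the narrative, not a template.

STATS = json.loads("""
{
  "senate":   {"democrat": {"median_salary": 85000, "staff_count": 120}},
  "assembly": {"democrat": {"median_salary": 78000, "staff_count": 210}}
}
""")

def retrieve(branch: str, party: str) -> dict:
    """Step 1: pull the precomputed figure from the source of truth."""
    return STATS[branch][party]

def generate(branch: str, party: str) -> str:
    """Step 2: layer narrative on top of the retrieved fact, never invent it."""
    fact = retrieve(branch, party)
    return (f"{branch.title()} {party.title()} offices employ "
            f"{fact['staff_count']} staff at a median salary of "
            f"${fact['median_salary']:,}.")

print(generate("senate", "democrat"))
```

Because the numbers are retrieved rather than generated, every figure in the output can be traced back to a line in the JSON.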
Used this way, AI behaves much less like science fiction and much more like an ultra-fast research assistant that only works with what you give it. At the same time, it inherits every gap and bias in those inputs.
One part of the report looks beyond numbers to a concept called “Enforced Sameness”—institutions that look diverse on paper but quietly punish dissent, direct communication, or different moral vocabularies. In those environments, people learn to self-censor long before formal rules are broken.
AI can make that problem worse, or it can make it better.
Fake news thrives where no one can see the underlying tables: who is in the room, who gets promoted, who gets paid what, and which communities are left out of leadership. The combination of this directory and this audit moves in the opposite direction, toward tables anyone can inspect and claims anyone can check against public records.
The scripts bundled with the directory—PDF extractors, district setup tools, and LLM pipelines—exist for one reason: to turn messy, scattered public records into something you can actually work with. That work is not glamorous, but it is where “AI in government” really lives.
If a human can’t clearly categorize it, a machine definitely can’t. The California Legislature Directory and the 2026 staff audit are built from that premise. They are not a new brain for the state—they are a clearer mirror. Used well, they can help staff, advocates, and the public separate fact from fiction in how power and inclusion actually work in Sacramento.