From personal assistants to algorithmic influence, AI challenges human autonomy and highlights the need for education, ethics, and regulation.
by Nabajyoti Narzary
Guest Commentary
The trouble with living in 2025 is that the future doesn’t knock politely anymore — it barges in, makes itself at home, rearranges the furniture of our lives, and leaves us wondering when we agreed to let it in.

AI stands in a similar place today. The difference is that this time the machine isn’t confined to a factory or lab. It is intimate, personal, pervasive — helping your child with homework, curating your playlist, reminding you to drink water — and possibly selling those data points to someone you’ve never met. It follows our movements, records our preferences, and learns our habits until it can predict them with disquieting accuracy.

We call this “personalization,” but it is really a mirror showing how predictable we’ve become. Free will, so cherished as a human ideal, begins to resemble a carefully staged performance in which the lines are gently suggested by algorithms. The printing press gave control of ideas to the many; AI could reverse that, shifting influence back to those who design the systems. If free will was ever a pristine thing, algorithms have now smudged the glass. Nietzsche declared “God is dead,” and humanity took his place. Now our own creations, powered by code instead of commandments, test how it feels to dethrone us.

Education shows the paradox most clearly. Teachers admit that essays, lab reports, and even poetry assignments arrive in prose too polished for a sleep-deprived teenager. The deeper question isn’t cheating; it’s the gradual outsourcing of thought. If a machine can generate a flawless answer in seconds, what exactly is the student supposed to learn? Einstein once argued that education should be about ideas, not just facts. In the AI age, that principle is urgent: machines will always store and retrieve better than we can; what they cannot do is cultivate judgment, empathy, and context. Those are the skills education must protect.
Half-literacy in the digital age is more dangerous than illiteracy ever was. A person who can read but cannot discern misinformation, or who can navigate a device but cannot question its intent, is more vulnerable than one who lacks access. As AI advances, truth and fabrication will blur with greater sophistication. The challenge will not be finding answers but knowing which questions are worth asking.

What makes AI’s rise feel different from earlier technological revolutions is its intimacy. We don’t just use it; we confide in it. Chatbots have become companions to the lonely and brainstorming partners to the overworked — sometimes more rewarding to talk to than another human. The machine never interrupts or takes offense. But comfort has a price. Time saved is rarely spent on rest or reflection; it is reinvested into more screen time, deeper dependence, and anxiety about being left behind.

Meanwhile, tech companies frame this as empowerment — “democratizing knowledge,” “upskilling communities,” “bridging the digital divide.” Sometimes these initiatives are genuine; other times they are akin to the colonial “free railways” — convenient for the empire, less so for the colonized.

The danger isn’t only in surveillance or job loss. It is in the erosion of the inefficiencies that make us human. Progress is messy, contradictory, full of detours. A society optimized to perfection may function better, but it would lose the unpredictability that sparks art, discovery, and change. We now see the appetite for ranking people with algorithmic “merit scores” — a digital caste system where privilege and productivity are weighed and tagged. The Gold Class gets the plum opportunities; the Bronze Class is told it’s still “included” while quietly excluded from anything that matters. Technology, we’re told, is the great equalizer. In practice, it magnifies the inequalities it claims to erase.
Facebook’s “Free Basics,” meant to connect the unconnected, was accused of enabling propaganda and deepening divides. AI could do the same — faster, more precisely, and harder to catch in the act.

Yet to see AI only as a threat is to miss its potential. Like the printing press, it is a tool, not a destiny. Used with transparency, accountability, and imagination, it could extend education to the remotest villages, deliver healthcare to those without doctors, and give voice to silenced communities. Left entirely to the pursuit of profit, it will entrench the disparities it claims to solve.

The real question is whether we can cultivate the wisdom to steer it. That means regulation as ambitious as the technology, public literacy campaigns that go beyond “how to use” guides, and the humility to admit that not every problem needs a machine-learning solution.

Human history is a long conversation with our inventions. At first, they astonish us. Then we adapt. Eventually, we forget who began the conversation, and the creation becomes background, like wallpaper we no longer notice. The printing press, the steam engine, the light bulb — each began as a wonder and ended as something ordinary. AI will follow the same arc unless we choose otherwise. What feels extraordinary today will be mundane tomorrow, but in this brief in-between moment, we still have the chance to decide the terms of our partnership with it. The future is shaped not only in public breakthroughs but in what we accept, what we automate, and what we stop questioning. If we surrender those choices to the machine, it will not need our consent. It will keep speaking long after we have stopped listening.
Nabajyoti Narzary works in administration, where he explores the intersection of people and institutional systems at the grassroots level, uncovering untold stories of governance and everyday resilience. Writing is his sanctuary, flowing from daily observations and reflective moments, often captured in a personal diary and complemented by long evening walks with his dog, Nia. A college trip to Serbia sparked a lasting interest in Eastern European culture and history, inspiring a deep appreciation for the region’s complex tapestry shaped by centuries of conflict, coexistence, and cultural evolution.