Saturday, February 28, 2026

BURNING CHROME | MiniDisc: An almost forgotten favorite

Originally posted on: techsabado.com/2025/11/08/burning-chrome-minidisc-an-almost-forgotten-favorite/

In the crowded museum of audio history, the MiniDisc stands like a stubborn exhibit that refuses to gather dust. Launched in 1992, Sony’s answer to the compact cassette promised a future of sleek portability and digital convenience. The disc, enclosed in a protective plastic shell, seemed ahead of its time. It resisted scratches and dust, held music encoded through Sony’s ATRAC compression system, and could be re-recorded endlessly without degradation. For a while, it looked like the perfect format to bridge the gap between the tactile charm of analog and the clean efficiency of digital audio.

Sony envisioned the MiniDisc as the next step after the cassette Walkman, a product that had already defined a generation of music listening habits. Unlike CDs, which were fragile and prone to skipping in portable players, MiniDiscs had built-in buffering that made them shock-resistant. They could be slipped into pockets or tossed into backpacks without fear of scratches. A student on a crowded Tokyo train could carry hours of music without lugging around a tower of discs. A field journalist could record interviews on a device smaller than a paperback novel, then erase and reuse the same disc indefinitely.

The rise and the stall

The MiniDisc found enthusiastic acceptance in Japan, where Sony’s influence was strongest, and in parts of Europe where consumers valued its mix of portability and durability. For journalists, it became a reliable field recorder. For musicians, it offered a new way to capture demos without the fragility of DAT tapes or the noise of cassettes. But in the United States, one of the world’s largest consumer electronics markets, the format never found its footing. Compact discs were already entrenched, and by the late 1990s, CD burners and recordable CD-Rs undercut one of the MiniDisc’s main advantages: the ability to record easily at home.

Sony kept iterating, introducing features that tried to extend the MiniDisc’s lifespan. In 2000, MiniDisc Long Play allowed discs to store more hours of music through tighter compression. NetMD followed, promising a digital bridge between computers and MiniDisc units, although Sony’s clunky, restrictive software frustrated users who had grown accustomed to the free flow of MP3s. Then came Hi-MD in 2004, which boosted storage to a full gigabyte and, more importantly, allowed recording in uncompressed PCM audio, finally giving purists the CD-quality sound they had been demanding.

But technology waits for no one. By the time Hi-MD arrived, the iPod from Apple had already begun reshaping how the world consumed music. The promise of carrying an entire library in your pocket, with no moving parts or blank media to purchase, was irresistible. Streaming services would later finish the job, rendering physical music formats optional at best. Sony officially stopped producing MiniDisc players in 2011, and in early 2025, the company announced it would cease manufacturing recordable discs altogether.

The cult of the MiniDisc

Yet MiniDisc remains alive, not in the mainstream, but in the cracks of modern audio culture where nostalgia, ritual, and creative curiosity thrive. Among collectors and music obsessives, it has earned a place beside vinyl records and cassettes as a format that represents more than just playback—it represents an experience.

To slide a MiniDisc into a Walkman or a deck is to engage with music deliberately. Each disc can be labeled, tracks renamed or reordered, and recordings trimmed or spliced directly on the device. Unlike the infinite scroll of Spotify, the MiniDisc imposes a kind of discipline. You think about what you want to record, how you want to name it, and which takes are worth keeping. It forces curation in a world drowning in algorithmic abundance.

Communities of enthusiasts still trade blank discs online, repair aging devices, and even release new albums in MiniDisc format. Independent labels, especially in experimental and electronic music, issue limited-edition runs on MD as a kind of boutique collectible. The appeal lies not just in the sound, but in the tangible object: a palm-sized cartridge that embodies care, intention, and permanence.

The MiniDisc’s durability and portability are still unmatched. The cartridge design makes it practically immune to scratches, a problem that has plagued CDs since their inception. The anti-skip buffer makes it reliable for live performance playback, field recording, or simply walking with music in your pocket. The ability to re-record over the same disc thousands of times without signal degradation gave it a practical edge over both cassettes and CDs.
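The anti-skip buffer works on a simple principle: the player reads audio from the disc faster than it plays it back, banking the surplus in memory so playback can continue from RAM whenever a physical shock knocks the laser off track. A minimal sketch of that read-ahead idea follows; the class name, buffer size, and one-chunk-per-second granularity are illustrative assumptions, not Sony’s actual firmware design.

```python
from collections import deque

class AntiSkipBuffer:
    """Toy model of a read-ahead playback buffer (illustrative, not Sony's design)."""

    def __init__(self, capacity_chunks=10):
        # FIFO queue: oldest audio chunk is played first.
        self.buf = deque(maxlen=capacity_chunks)

    def read_ahead(self, audio_chunk):
        """Called while the laser is tracking: bank a chunk in memory."""
        self.buf.append(audio_chunk)

    def play(self):
        """Called at playback rate: drain from memory, so a brief
        loss of tracking does not interrupt the audio."""
        return self.buf.popleft() if self.buf else None

# The player reads ahead while tracking is good...
player = AntiSkipBuffer(capacity_chunks=3)
for chunk in ["s1", "s2", "s3"]:
    player.read_ahead(chunk)

# ...then a shock interrupts reading, but playback keeps draining the buffer.
print(player.play(), player.play())  # s1 s2
```

As long as the shock ends before the buffer empties, the listener never hears the interruption; the same principle later appeared in anti-skip CD players and, at much larger scale, in streaming clients today.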

In the studio, Hi-MD units such as Sony’s MZ-RH1 became indispensable tools for musicians who needed portable PCM-quality recording. Producers and sound designers used MDs for capturing rehearsals, field sounds, and demo takes. Tascam and TEAC released studio decks that integrated MiniDisc recording with CD functionality, making them reliable workhorses in project studios. The format’s editing features—splitting, combining, and erasing tracks without touching a computer—remain a kind of quiet magic that most modern gear still doesn’t replicate in such a portable form.

But the downsides have always been part of the story. ATRAC compression, while innovative, left artifacts that some listeners never forgave, especially in long-play modes that sacrificed fidelity for capacity. Sony’s decision to lock users into proprietary software during the NetMD era alienated many at the exact moment when MP3 culture was exploding. Today, the scarcity of blank media and the rising cost of secondhand players make MiniDisc a difficult habit to maintain. Devices break, batteries degrade, and with Sony pulling the plug on new disc production, the supply chain is hanging by a thread.

The collector’s market

Despite these challenges, demand for top-tier units has surged on the resale market. Sony’s MZ-RH1, the final flagship portable recorder, is treated like a holy relic among enthusiasts. Earlier models from Sharp and Panasonic are also sought after for their design quirks and unique sonic qualities. Studio decks from Tascam and Denon still find homes in project studios, where engineers prize them for their reliability and integration with analog setups.

Blank discs themselves have become collectible items. Early translucent Sony designs, special editions with metallic shells, and even no-name brands are traded online like rare vinyl pressings. The scarcity has created a market where enthusiasts willingly pay premiums to continue feeding their players, underscoring just how strong the devotion to the format remains.

For many, the MiniDisc’s revival is about more than sound quality. It represents an intentional way of listening and recording in an age of endless convenience. Much like vinyl, which reemerged from obsolescence to become a billion-dollar industry again, MiniDisc appeals to those who crave tangibility. It asks you to slow down, to engage with music as a physical artifact rather than a fleeting stream.

In studios, the format still has creative value. Some producers use ATRAC’s compression as a sonic filter, giving recordings a distinct character. Field recordists appreciate the ruggedness of Hi-MD units in situations where laptops or flash recorders feel fragile. Even if not the main recording medium, MiniDisc can serve as a dependable backup or as a creative tool in hybrid digital-analog workflows.

The question now is not whether MiniDisc will make a mainstream comeback—it won’t—but whether it can persist as a niche format sustained by communities of enthusiasts. As Sony phases out blank media, the future depends on the dedication of users who continue to refurbish machines, trade supplies, and advocate for the format’s unique virtues. Independent labels may keep pressing limited MD releases for collectors. Hobbyists might hack together new software tools to improve PC integration or even experiment with manufacturing compatible blank discs.

What seems clear is that MiniDisc has crossed into the territory of cultural artifact. It belongs to the same family as vinyl and cassettes: obsolete by commercial standards but treasured by those who see value in its tactile, deliberate rituals. For musicians and listeners who still use it daily, it remains not just a format but a philosophy—a refusal to surrender entirely to the frictionless convenience of the cloud.

The irony is that MiniDisc, once dismissed as a failed format, now thrives precisely because it failed. In a music industry dominated by the intangible, its survival depends on being physical, scarce, and resistant to the stream. Each disc, each label scrawled in pen, each session captured on ATRAC or PCM becomes part of a story that the format’s devotees are determined to keep telling.

In the end, the MiniDisc may never reclaim its place in the mainstream, but that was never the point. Its value lies in its stubbornness, its resilience, and its ability to inspire loyalty decades after its commercial death. For those of us still sliding in discs, pressing record, and watching the tiny display flicker with life, the format is not dead. It is simply underground, and perhaps that’s where it was always meant to be.


Saturday, February 21, 2026

BURNING CHROME | Every 'tech bro' wants to rule the world

by Jing Garcia -- because the mind is a terrible thing to taste.

Originally posted on: techsabado.com/2025/11/01/burning-chrome-every-tech-bro-wants-to-rule-the-world/

Silicon Valley in California was long romanticized as the crucible of innovation, a place where idealists and engineers imagined transforming the world through code, connectivity, and creativity. The "tech bro", once slang for overconfident young men in startups, has evolved into something more ominous: a cultural and economic class whose power extends into politics, surveillance, and ideology—and whose agenda increasingly threatens democratic norms, privacy, and equality.

The term describes more than a programmer with swagger. It represents a cultural mindset: that technological progress is intrinsically good; that disruption excuses almost anything; that wealth validates value; and that labor rights, privacy, and government regulation are inconveniences, not protections. This identity is rooted in libertarian ideals but hardened by neoliberal economics, promoting the belief that private innovation should override public accountability. As one Programmable Mutter analysis put it, Silicon Valley elites often assume they are “exempt from rules that apply to everyone else.”

Suddenly, Silicon Valley is right... leaning right

Over recent years, many tech leaders (including those in the so-called 'Magnificent Seven' in big tech)—once associated with liberal values of openness and diversity—have tilted toward conservative or reactionary positions.

One reason is structural. The neoliberal order that nurtured tech’s rise—deregulation, light taxation on capital gains, minimal labor protections—fits the interests of tech entrepreneurs (at least in the U.S.). When global progressive movements demand oversight, antitrust enforcement, or stronger labor rights, tech elites view these as existential threats to their business model. A 2022 Data & Society essay described this as a shift from “disruption as innovation” to “disruption as political self-defense.”

There is also ideological capture. As Luc Lalande argued in Medium, Silicon Valley’s culture of “tech exceptionalism” fosters the idea that regulation hampers progress, rather than protecting society.

Finally, with concentration of wealth and influence, tech giants are no longer just selling devices—they shape policy, law, and media narratives. Platforms now control what speech is amplified or suppressed, often with minimal transparency. The political leanings of some high-profile investors and executives, widely reported in outlets such as Newsweek, illustrate how personal ideology can intersect with platform power to influence the broader discourse.

Here's the thing: the tech bro agenda is not hidden. It revolves around four priorities:

- Protecting and expanding their control over user data and algorithms.

- Reducing regulation on privacy, competition, and labor.

- Influencing policymaking in ways that shield their industries.

- Promoting the ideology that entrepreneurship and innovation alone can solve deep social problems.

As Shoshana Zuboff argued in The Age of Surveillance Capitalism, this worldview treats individuals as raw material for behavioral prediction, commodifying human experience itself.

In fairness, but still...

To be fair, technology born of this culture has improved lives: instant communications, telemedicine, digital payments, and online platforms for marginalized voices. In the Philippines, e-wallets like GCash and Maya have expanded financial access, and e-commerce platforms connected small sellers to national markets.

But the cons are undeniable. Gig-economy apps have blurred labor rights. Algorithms curate what we see without accountability. Data centers and crypto operations consume vast energy. And monopolistic tendencies mean a few companies dominate the digital public square, determining which voices are heard.

Nowhere is this clearer than in privacy. Every click, location, and preference is harvested for targeting. In our country, the Philippines, the rollout of the SIM Registration Act, meant to combat scams, has sparked concerns about data protection and potential leaks. The National Privacy Commission (NPC) has repeatedly reminded telcos and platforms to secure sensitive information, yet breaches continue to make headlines.

Globally, predictive policing tools and facial recognition systems—covered by The Guardian and others—demonstrate how technology designed for efficiency can slip into surveillance. Locally, CCTV expansion tied with AI-based analytics in cities like Manila and Makati raises similar questions: who watches the watchers, and who benefits?

Do you feel safe?

The blunt answer is that users are vulnerable. From manipulation during elections to algorithmic bias in lending apps, harms are not hypothetical—they are happening. Legal scholar Frank Pasquale, in The Black Box Society, warned of opaque systems creating “zones of automated impunity.” That warning is as relevant in Manila as it is in Mountain View, the city at the heart of Silicon Valley.

While the European Union’s GDPR provides some safeguards, our country's data protection laws remain patchwork and underfunded. The NPC’s enforcement powers are often limited by resources. In a country where mobile penetration is high but digital literacy uneven, citizens are doubly exposed.

The tech bro agenda is inseparable from neoliberal capitalism. Neoliberalism champions deregulation, privatization, and shifting risk onto individuals. Tech thrives in this environment: light taxes, permissive labor policies, and open global talent markets. Yet, as political economist David Harvey observed, neoliberalism is not about shrinking the state—it is about reconfiguring state power to favor capital.

Again, in the Philippines, we see this clearly: public universities produce the tech workforce, government builds the digital infrastructure, yet much of the profit flows to private companies. E-commerce thrives on logistics networks partly subsidized by state programs, but gig workers shoulder the risks without benefits.

Should we follow their lead?

To simply follow the tech bro blueprint would be reckless. Their vision privileges profit and efficiency over democratic accountability. The risks—eroded labor rights, weakened privacy, algorithmic injustice, and democratic backsliding—are too severe.

Fighting back means asserting democratic control of technology.

Regulators like the NPC must be empowered with stronger enforcement powers and resources. Antitrust actions, similar to those now happening in the United States and European Union, should be explored to curb monopolistic platforms locally.

Workers in the tech industry—whether coders in BGC, riders in Cebu, or call center agents in Davao—must organize and demand ethical standards and labor protections. The Trade Union Congress of the Philippines has begun conversations on digital labor rights, but more is needed.

Public awareness is key. Digital literacy programs, such as those piloted in barangays by local NGOs, must be scaled up to help users understand how data is collected and used.

Alternatives matter, too. Cooperative platforms, open-source projects, and publicly owned networks could give citizens more control. Some LGUs are already experimenting with community Wi-Fi and municipal digital services—small steps toward de-privatizing the digital commons.

Finally, cultural work is essential. As Jacobin magazine noted, Silicon Valley sells us the myth of heroic disruption while masking its dependence on neoliberal policies. Filipinos must resist the glorification of profit-driven disruption and insist on narratives of solidarity, accountability, and care.

The tech bro phenomenon is not just about overhyped gadgets or brash personalities online. It is a political economy project: less oversight, more concentration of power, and a society redesigned around data extraction. If unchecked, it will deepen inequality and weaken democracy.

For Filipinos, this is not abstract. It is about how our data is used, how our workers are treated, how our elections are shaped, and how our daily lives are mediated by algorithms. The fight for rights—online and offline—means refusing to be passive consumers. We must act as citizens, demanding accountability and imagining alternatives.

If we surrender to the tech bro agenda, we will inherit a future built for their profit, not our freedom.


Saturday, February 14, 2026

BURNING CHROME | From hype to reality: Where AI is taking us

by Jing Garcia -- because the mind is a terrible thing to taste.

Originally posted on: techsabado.com/2025/10/25/burning-chrome-from-hype-to-reality-where-ai-is-taking-us/

Artificial intelligence is no longer science fiction. It has left the ivory towers of research labs and the glossy brochures of Silicon Valley and is now embedded in our phones, our banks, our customer service lines, our social media feeds, and even our politics. The question is not whether AI is here—it is. The question is whether the world is prepared to handle the consequences of letting machines think for us.

So when did it start? The birth of artificial intelligence is often traced to the summer of 1956, when a group of computer scientists gathered at Dartmouth College to imagine machines that could reason. John McCarthy, Marvin Minsky, Claude Shannon and Nathaniel Rochester may not have known it then, but they planted the seeds of a revolution that is only now bearing its strangest fruit.

In the decades that followed, AI suffered growing pains. The 1970s AI winter showed how fragile funding and public trust could be when machines failed to live up to their hype. But the 2010s marked a renaissance: neural networks, once dismissed as clumsy, suddenly became supercharged with big data and powerful graphics processors. That moment birthed today’s generative AI, with systems like OpenAI’s ChatGPT and Anthropic’s Claude, which can draft articles, generate images, and even write code.

Stanford University’s 2025 AI Index records how far we’ve come, noting that inference costs have dropped dramatically and efficiency gains have widened access to advanced tools. The technology has grown so fast that regulators, educators and workers are scrambling to keep pace.

What AI really is—and isn’t

It helps to get the definitions straight. AI is the umbrella term, the broad pursuit of building systems that mimic human intelligence. Inside that umbrella sits machine learning, where algorithms learn patterns from data instead of following rigid, hand-coded rules. And within that, there’s deep learning—stacking artificial neurons into many layers to extract meaning from raw text, images or audio.

Large language models, or LLMs, are a specific kind of deep learning system. They’re trained on massive text datasets and use transformer architectures to predict what comes next in a sentence. That simple trick—predicting the next word—produces an uncanny ability to answer questions, summarize documents, and draft content. But don’t be fooled: these systems don’t understand meaning the way humans do. They remix patterns, and their output reflects the biases and gaps of their training data.
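That “simple trick” can be made concrete with a toy model. The sketch below is a bigram counter, not a transformer: real LLMs learn billions of parameters over enormous corpora, but the core objective is the same—score which token is most likely to come next. The corpus, function names, and scoring here are illustrative inventions.

```python
import math
from collections import defaultdict, Counter

# A tiny training corpus; an LLM would see trillions of tokens instead.
corpus = "the disc spins the disc skips the disc plays".split()

# Count how often each word follows each other word (a bigram model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token_distribution(prev):
    """Return P(next | prev) as a dict, via a softmax over raw counts."""
    exps = {word: math.exp(c) for word, c in counts[prev].items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

def greedy_next(prev):
    """Pick the single most probable next token, as greedy decoding does."""
    dist = next_token_distribution(prev)
    return max(dist, key=dist.get) if dist else None

print(greedy_next("the"))  # → disc
```

Note what the model does and does not do: it reproduces statistical patterns from its corpus with no notion of what a “disc” is, which is exactly why LLM output inherits the biases and gaps of its training data.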

Nonetheless, no one denies AI’s promise. In medicine, AI tools are already predicting disease risks and helping discover new drugs. The World Health Organization in 2023 issued guidelines for using large multimodal models in healthcare, recognizing both the promise and the ethical landmines. In finance, AI screens for fraud and stress-tests portfolios. In manufacturing, computer vision detects defects before they spiral into costly recalls.

The benefits are clear: faster insights, lower costs, and augmented human creativity. AI can help scientists shift from slow hypothesis-driven discovery to data-driven breakthroughs. A review in Nature Reviews Physics this year even argued that AI is reshaping the scientific process itself.

But the catch is just as clear. These same systems can fabricate lies with the same fluency as facts. They can amplify biases and stereotypes. They can generate realistic deepfakes that threaten elections and destabilize societies. UNESCO has warned that AI can distort historical memory if misused—a sobering reminder in an era of viral misinformation.

Are our jobs on the line?

The International Monetary Fund has sounded the loudest alarm on labor. It estimates that around 40 percent of jobs worldwide are exposed to AI, with advanced economies seeing exposure levels closer to 60 percent. The nuance is important: some of those jobs will be augmented, not destroyed. But history teaches us that automation tends to accelerate inequality, especially during recessions when firms are quick to cut costs.

Routine cognitive jobs—customer service, paralegal research, and even journalism—are on the firing line. The worry is not just about lost paychecks but about hollowed-out career ladders, where entry-level roles vanish and workers can’t climb into more skilled positions.

Then there is the issue of infrastructure. These models don’t run on magic—they run on electricity, water and servers packed into sprawling data centers. The International Energy Agency projects that data-center electricity demand could more than double by 2030, with AI as the primary driver.

This isn’t just about cost. It’s about whether the grid can keep up. Already, local governments in the United States are wrestling with AI companies over water rights for cooling servers. The cloud may be vast, but it isn’t infinite. If AI keeps scaling at today’s rate, the bottleneck will be power itself.

Governments are also not sitting idle. The European Union’s AI Act entered into force in August 2024, laying out bans on certain high-risk uses and stricter rules for so-called “foundation models.” The United States, slower as usual, has leaned on the National Institute of Standards and Technology’s voluntary AI Risk Management Framework. The OECD updated its principles in 2024, emphasizing transparency and accountability.

But rules on paper are not rules in practice. The real test will be enforcement. Tech companies, predictably, lobby for self-regulation. Civil society groups demand stronger checks. And in between are workers, students and consumers left wondering who actually holds the leash on this runaway dog.

The specter of sentience

Then there is the philosophical question that refuses to die: will AI become sentient?

Scientists remain skeptical. Editorials in journals from Science to Nature Machine Intelligence stress that today’s systems have no consciousness, no subjective experience, no awareness. They are powerful statistical parrots. To call them persons is premature. To treat them as dangerous tools is prudent.

Still, perception matters. When users mistake fluency for understanding, they can over-trust these systems, delegating decisions that ought to remain human. The danger is less about AI “waking up” than about humans falling asleep at the wheel.

Meanwhile, some technologists whisper about the next frontier: what happens when AI meets quantum computing? In theory, quantum systems could accelerate the matrix math at the heart of AI, making training and inference exponentially faster. In practice, quantum computers remain noisy and limited. Reviews in Nature Reviews Physics and other journals caution against overhyping quantum AI, though they admit niche applications in chemistry and finance may appear sooner.

If that marriage ever materializes, the computing power could be staggering. But that’s a future problem, and the present is messy enough.

A choice for humanity

The real question is whether the world is surrendering to AI. Adoption numbers suggest not surrender but integration. McKinsey’s 2025 global survey shows most large firms are deploying generative AI, but they are also building governance frameworks and workforce training in parallel.

This isn’t capitulation. It’s co-evolution. The tools are here to stay, but societies still have choices about how they are used. We can embed AI with oversight, require provenance standards for media, and design human-in-the-loop systems. Or we can drift into over-reliance and regret.

In the end, the future of AI is not a technological question. It is a human one. We already know AI will be powerful. The unknown is whether we will use it wisely.

The winners of this era will not be the firms with the biggest models or the most GPUs. They will be the societies that balance innovation with discipline, efficiency with ethics, and power with responsibility. That means documenting data, monitoring models, training workers, and putting humans in the loop where it counts—especially in medicine, law and governance.

For all the talk of singularity and machine consciousness, the real story is more mundane. AI will become ambient, embedded into daily life like electricity or the internet. And just like those earlier revolutions, it will both empower and endanger, depending on how we shape it.

If we choose well, AI could help humanity tackle climate change, disease and poverty. If we choose poorly, it could widen inequality, destabilize democracies, and exhaust the planet’s resources.

In short: the future of AI is the future of mankind. Whether it is liberation or surrender depends on us.
