Weekly Roundup · 5 min read

5 Things in AI This Week Worth Your Time — March 31, 2026

Axios gets backdoored on NPM, Oracle fires 30,000 people by email, Claude Code's source leaks through a sourcemap, Ollama goes native on Apple Silicon, and Microsoft tells you Copilot is for entertainment only.

Abhishek Sharma · Head of Engg @ Fordel Studios

Five stories. No filler. Here is what actually mattered in AI and engineering this week.

···

What happens when Axios — the HTTP library in half the internet — gets compromised?

Malicious versions of Axios appeared on NPM this week, shipping a remote access trojan to anyone who installed or updated without pinned dependencies. This is not supply chain risk from a threat-modeling paper. This is the library your backend calls every time it talks to an external API, compromised in a registry it is pulled from more than 17 million times a day.

If you are not running lockfile audits in CI, you are choosing to find out about these things from your incident channel instead of your pipeline. The attack surface is not your code — it is every transitive dependency your code trusts implicitly. We wrote about this pattern two weeks ago when LiteLLM got hit. The lesson has not changed: pin your versions, audit your lockfiles, and treat npm install as a security-critical operation. Because it is.
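Wiring that audit into CI is a small amount of YAML. A minimal sketch as a GitHub Actions job — the workflow name and the "high" severity threshold are my choices, tune them to your risk tolerance:

```yaml
# Fail the pipeline when the lockfile drifts or a known-bad version slips in.
name: dependency-audit
on: [push, pull_request]

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # npm ci installs exactly what package-lock.json specifies and fails if it
      # disagrees with package.json; --ignore-scripts blocks install-time payloads
      - run: npm ci --ignore-scripts
      # Fail the build on advisories at or above "high" severity
      - run: npm audit --audit-level=high
```

Note the `--ignore-scripts` flag: most NPM supply chain payloads detonate in install hooks, so refusing to run them in CI closes the most common door.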

17M+ daily NPM downloads for Axios, one of the most depended-upon packages in the JavaScript ecosystem.
···

Is Oracle's 30,000-person layoff a strategy or an admission?

Oracle cut 30,000 jobs this week. Not through meetings or manager conversations — through a 6 AM email. The stated reason is the usual: AI efficiency, cloud transformation, operational streamlining. The real signal is that Oracle is betting it can replace a significant chunk of its workforce with automation and not lose delivery capacity.

Here is my take: the number does not scare me. What scares me is the method. If you cannot invest the basic human decency of a conversation when you fire someone, you have already told your remaining employees exactly how much you value them. The companies that will win the AI transition are the ones that use automation to make their people more effective, not the ones that use it as a justification for a spreadsheet-driven purge. Oracle is optimizing for next quarter's earnings call. That is not a strategy.

The companies that win the AI transition use automation to make people more effective — not as justification for a spreadsheet-driven purge.
Abhishek Sharma
···

How did Claude Code's entire source code end up on Hacker News?

Anthropic shipped Claude Code as a minified NPM package. Standard practice. Except they also shipped the sourcemap file, which lets anyone reconstruct the original source. A developer found it, read through the entire codebase, and posted a detailed breakdown on Hacker News. No hack. No exploit. Just a forgotten .map file in a published package.

This is simultaneously embarrassing and completely unsurprising. Every team that ships to NPM has had this near-miss. The interesting part is not the leak itself — it is what people found inside. Claude Code's architecture is well-structured, its system prompts are thoughtful, and the agentic patterns are solid engineering. Anthropic accidentally gave the community a masterclass in building AI dev tools. The lesson for everyone else: add sourcemap exclusions to your publish config. Today.

Takeaways from the Claude Code leak
  • Always check .npmignore or files[] in package.json before publishing
  • Sourcemaps in production packages are a common oversight — automate the check
  • The leaked architecture validates patterns many teams are independently discovering
  • Anthropic handled it well by not panicking — the code quality spoke for itself
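The publish-config fix is a few lines. A minimal sketch of the package.json side, assuming your build output lands in dist/ (the directory name and the prepack guard are illustrative):

```json
{
  "files": ["dist"],
  "scripts": {
    "prepack": "! find dist -name '*.map' | grep -q ."
  }
}
```

The `files` array is an allowlist: only what you name gets published, which beats maintaining a blocklist in .npmignore. The `prepack` script runs automatically on `npm publish` and aborts it if any sourcemap survived the build. And before any release, `npm publish --dry-run` prints exactly what would ship — thirty seconds that would have saved Anthropic this story.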
···

Why does Ollama on MLX matter for local AI development?

Ollama announced MLX-native inference on Apple Silicon this week, in preview. Previously, Ollama on Mac used llama.cpp, which works but does not fully exploit the unified memory architecture of M-series chips. MLX, Apple's own ML framework, does. Early benchmarks show meaningful speedups for inference on M3 and M4 hardware.

This matters because local inference is not a hobby project anymore. When you can run a 70B parameter model on a MacBook Pro at usable speeds, the calculus changes for development workflows, air-gapped environments, and privacy-sensitive applications. If you are building AI features and your development loop requires an API call to test every prompt change, you are slower than you need to be. Local models are the new localhost.
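And "local models are the new localhost" is literal: swapping a hosted API for Ollama is one HTTP call. A sketch in TypeScript against Ollama's default REST endpoint (port 11434 is Ollama's default; the 70B model tag is illustrative — use whatever you have pulled locally):

```typescript
// Shape of a request to Ollama's /api/generate endpoint.
interface GenerateRequest {
  model: string;
  prompt: string;
  stream: boolean; // false = one JSON response instead of a token stream
}

function buildGenerateRequest(model: string, prompt: string): GenerateRequest {
  return { model, prompt, stream: false };
}

// Dev-loop call: no API key, no network egress, no per-token bill.
async function generateLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildGenerateRequest("llama3.3:70b", prompt)),
  });
  const data = await res.json();
  return data.response; // Ollama returns the completion in the "response" field
}
```

Point your prompt-iteration loop at this instead of a hosted endpoint and every test becomes free and instant — then switch the base URL back for production.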

70B-parameter models are now runnable on a MacBook Pro. MLX's unified-memory approach eliminates the CPU-GPU transfer bottleneck.
···

Did Microsoft just tell you not to trust Copilot?

Microsoft updated Copilot's terms to include language that it is "for entertainment purposes only." Read that again. The tool they are selling to enterprises as a productivity multiplier, the tool integrated into Office 365 and Azure, the tool they pitch as your AI-powered coding assistant — is, legally speaking, entertainment.

This is not Microsoft being humble. This is the legal team building a liability firewall while the sales team promises transformation. Every enterprise buyer should read the terms of service for their AI tools. Not the marketing page. The actual terms. Because when your Copilot-generated code introduces a bug in production, Microsoft has already told you — in writing — that it was just for fun. The gap between how AI tools are marketed and how they are legally disclaimed is the most important story in enterprise software right now. Nobody is talking about it.

When your Copilot-generated code introduces a production bug, Microsoft has already told you — in writing — that it was just for fun.
···

That's the week. See you Monday.
