Google teams up with SpaceX

ALSO: Cache Writing Policies

Welcome back!

Do you remember the story of the tortoise and the hare? This week's problem can be solved by sending both of them racing down the same path and watching whether the hare ever laps the tortoise.

Today we will cover:

  • Linked List Cycle problem

  • Cache Writing Policies

Read time: under 4 minutes

CODING CHALLENGE

Linked List Cycle

Given head, the head of a linked list, determine if the linked list has a cycle in it.

There is a cycle in a linked list if there is some node in the list that can be reached again by continuously following the next pointer. Internally, pos is used to denote the index of the node that tail's next pointer is connected to. Note that pos is not passed as a parameter.

Return true if there is a cycle in the linked list. Otherwise, return false.

Example 1:

Input: head = [3,2,0,-4], pos = 1
Output: true
Explanation: There is a cycle in the linked list, where the tail connects to the 1st node (0-indexed).

Example 2:

Input: head = [1,2], pos = 0
Output: true
Explanation: There is a cycle in the linked list, where the tail connects to the 0th node.

Solve the problem here before reading the solution.

PRESENTED BY HUBSPOT

HubSpot Dev Platform updates from Spring Spotlight

See what's new for the HubSpot Developer Platform! Ship faster with AI coding tools, build MCP-powered AI connectors, run serverless functions with support for UI extensions, and use date-based versioning to streamline roadmap planning.

SOLUTION

To detect a cycle in a linked list, we'll use Floyd's Cycle Finding Algorithm, also known as the "tortoise and hare" algorithm.

We'll use two pointers: a slow pointer and a fast pointer. The slow pointer moves one step at a time, while the fast pointer moves two steps.

The main idea is that if there's a cycle, the fast pointer will eventually catch up to the slow pointer: once both pointers are inside the cycle, the fast pointer closes the gap by one node on every step, so they must meet. If there's no cycle, the fast pointer will reach the end of the list.

The time complexity of this solution is O(n), where n is the number of nodes in the linked list, and the space complexity is O(1), since we only ever keep two pointers.
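Here's a minimal Python sketch of this approach. The ListNode class and the has_cycle name are illustrative stand-ins for whatever your environment provides.

class ListNode:
    # A singly linked list node (illustrative definition).
    def __init__(self, val=0):
        self.val = val
        self.next = None

def has_cycle(head: ListNode) -> bool:
    slow = fast = head
    # The fast pointer moves two nodes for every one the slow pointer
    # takes; inside a cycle, fast gains one node per step and must
    # eventually land on slow.
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    # fast ran off the end of the list, so there is no cycle.
    return False

# Example 1: [3,2,0,-4] with the tail linked back to index 1.
a, b, c, d = ListNode(3), ListNode(2), ListNode(0), ListNode(-4)
a.next, b.next, c.next, d.next = b, c, d, b
print(has_cycle(a))  # True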

HEARD ABOUT MINTLIFY?

One editor for writers, developers, and agents

Most doc tools make you choose: accessible for writers, or git-native for developers. Mintlify's editor does both. Writers get WYSIWYG editing, developers keep their git workflow, and AI agents contribute via MCP. Every change syncs both ways. Your whole team, in one place.

SYSTEM DESIGN

Cache Writing Policies

Imagine you’re running a large e-commerce platform where millions of users are constantly checking product prices and inventory. Every time a user requests information, your system needs to fetch it from the main database. Database queries are slow, and with millions of requests, your system becomes frustratingly sluggish.

This is where caching comes in. By storing frequently accessed data in a faster, in-memory storage (cache), you can serve requests much faster. Instead of hitting the database every time, your system first checks the cache. If the information is not found in the cache, then it queries the database.

While reading from a cache is straightforward, managing data updates in a cache can be challenging. This is where cache writing policies become important.
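Before we get to the write side, here's a rough sketch of that read path in Python, with a plain dict standing in for the cache and a hypothetical query_database stub standing in for the real database:

cache = {}

def query_database(product_id):
    # Stand-in for a slow database lookup (hypothetical helper).
    return {"id": product_id, "price": 19.99, "stock": 42}

def get_product(product_id):
    if product_id in cache:             # cache hit: serve from memory
        return cache[product_id]
    value = query_database(product_id)  # cache miss: fall back to the database
    cache[product_id] = value           # populate the cache for next time
    return value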

Let's start with the simplest writing policy: Write-Through. In this approach, when data needs to be updated, we write it to both the cache and the main database simultaneously. Only when both writes are complete does the system confirm the update. This policy ensures that the cache and database stay perfectly synchronized, making it ideal for systems where data consistency is critical, like financial applications or inventory management. But Write-Through has a significant drawback. Every write operation must wait for both the cache and database updates to complete, which can make write operations slower.
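A minimal sketch of Write-Through, again with plain dicts standing in for the cache and the database:

cache, database = {}, {}

def write_through(key, value):
    # Update the cache and the database together; the caller is only
    # acknowledged after both writes complete, so the two stores never
    # disagree, at the cost of waiting on the slower write.
    cache[key] = value
    database[key] = value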

To address the performance limitation of Write-Through, some systems use Write-Back (also called Write-Behind). With Write-Back, data is initially written only to the cache, and the system immediately confirms the update. The modified data is written to the database later, usually in batches. This makes write operations much faster since they don't have to wait for the slower database write. Write-Back is perfect for systems that need high write performance, like real-time analytics or gaming applications. But there's a risk. If the system crashes before cached data is written to the database, you could lose updates.
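Here's a rough sketch of Write-Back under the same stand-ins; the flush function is an assumed batching mechanism, not a specific library call:

cache, database = {}, {}
dirty = {}  # keys updated in the cache but not yet persisted

def write_back(key, value):
    # Confirm as soon as the cache is updated; the database write is deferred.
    cache[key] = value
    dirty[key] = value

def flush():
    # Persist deferred writes in one batch, e.g. on a timer. Anything
    # still in dirty when the process crashes is lost, which is the
    # durability risk described above.
    database.update(dirty)
    dirty.clear()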

Write-Around takes a different approach. When updating data, it bypasses the cache completely and writes directly to the database. This policy works well when the updated data isn't likely to be read again soon, like log entries. The downside is that if the same data is needed again quickly, it must be fetched from the database, causing a cache miss. Once that fetch happens, the data is stored in the cache so subsequent reads don't miss again.
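And a sketch of Write-Around under the same assumptions; note that evicting the stale cache entry on write is an extra step assumed here, not something the description above prescribes:

cache, database = {}, {}

def write_around(key, value):
    # Skip the cache and write straight to the database. Dropping any
    # stale cached copy (an assumed refinement, not stated above) keeps
    # a later read from returning outdated data.
    database[key] = value
    cache.pop(key, None)

def read(key):
    # The first read after a write misses, fetches from the database,
    # and repopulates the cache, as described above.
    if key not in cache:
        cache[key] = database[key]
    return cache[key]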

FEATURED COURSES

5 Courses of the Week

✅ IBM AI Engineering Professional Certificate: Get job-ready in under 4 months by building deep learning models with PyTorch and TensorFlow, plus GenAI apps with RAG and LangChain.

✅ AI Engineering Specialization: Build AI-powered apps using the OpenAI API, embeddings, vector databases, and LangChain through hands-on projects like a travel agent and personal assistant.

✅ Generative AI with Large Language Models: Learn how LLMs work under the hood, including transformer architecture, fine-tuning, and deployment, in this course built with AWS.

✅ Prompt Engineering Specialization: Learn prompt patterns and advanced techniques for working with LLMs like ChatGPT from Dr. Jules White at Vanderbilt.

✅ IBM RAG and Agentic AI Professional Certificate: Build production RAG pipelines and autonomous AI agents using LangChain, LangGraph, CrewAI, and Model Context Protocol (MCP).

NEWS

This Week in the Tech World

Google and SpaceX Eye Orbital Data Centers: Google is in advanced talks with SpaceX to launch AI data centers into orbit, per the WSJ. SpaceX is pitching orbital infrastructure as the lowest-cost option for AI compute ahead of its planned $1.75T IPO.

npm Supply Chain Attack: A Mini Shai-Hulud campaign compromised npm packages tied to Mistral, UiPath, and TanStack, targeting CI credentials and dependency chains. The incident is another reminder that open-source dependencies remain a soft spot in modern build pipelines.

Google Reports First AI-Discovered Zero-Day: Google's Threat Intelligence Group disclosed the first known case of hackers using AI to discover and weaponize a software flaw. The exploit was blocked, but the case signals AI is lowering the bar for sophisticated cyberattacks.

OpenAI Ships New Realtime Voice Models: OpenAI released GPT-Realtime-2, GPT-Realtime-Translate, and GPT-Realtime-Whisper in the API. The trio adds GPT-5-class reasoning, live multilingual translation, and streaming transcription for voice-driven apps.

GitLab Cuts Jobs in AI Pivot: GitLab announced layoffs as part of a restructuring to free up resources for AI agent priorities, and will exit several countries. Even developer-tool platforms are reorganizing their teams around agentic workflows.

Apple Brings E2E Encryption to RCS: Apple's iOS 26.5 beta enables end-to-end encryption for RCS messaging through supported carriers, on by default. The move narrows the privacy gap between iMessage and cross-platform texting on Android.

Cerebras Upsizes IPO to $4.8B: AI chip startup Cerebras raised its IPO price range to $150–$160 per share from $115–$125, now targeting up to $4.8B. Investor appetite signals the AI hardware boom is supporting a new wave of public chip listings.

Foxconn Hit by Ransomware: A group called Nitrogen claims to have stolen 8TB of data from Foxconn tied to Apple, Google, Dell, and Nvidia, including product schematics. The breach exposes a key weak point in the global electronics supply chain.

OpenAI Debuts Daybreak Security AI: OpenAI launched Daybreak, combining GPT-5.5-Cyber and Codex Security to help organizations detect and patch vulnerabilities faster. The tool directly competes with Anthropic's Claude Mythos in AI cyber-defense.

BONUS

Just for laughs 😏

HELP US

👋 Hi there! We are on a mission to provide as much value as possible for free. If you want this newsletter to remain free, please help us grow by referring your friends:

📌 Share your referral link on LinkedIn or directly with your friends.
📌 Check your referral status here.

YOUR FEEDBACK

What did you think of this week's email?

Your feedback helps us create better emails for you!


Until next time, take care! 🚀

Cheers,